We have recently proposed a novel, non-intrusive, real-time approach to measuring the quality of an audio (or speech) stream transmitted over a packet network. The proposed approach takes into account the diversity of the factors that affect audio quality, including encoding parameters and network impairments. The goal of this method is to overcome the limitations of the quality assessment techniques currently available in the literature, such as low correlation with subjective measurements or the need to access the original signal, which precludes real-time applications. Our approach correlates well with human perception, is not computationally intensive, does not need access to the original signal, and can work with any set of parameters that affect the perceived quality, including parameters such as FEC, which other methods usually do not take into account. It is based on the use of a Random Neural Network (RNN), which is trained to assess audio quality as an average human listener would. In this paper we compare the performance of the proposed method with that of other assessment techniques found in the literature.
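To make the workflow concrete, the sketch below shows the kind of mapping this implies: a learner is trained on configurations that human listeners have rated in subjective tests, and is then used at run time to map parameters measured on the live stream to a quality score. It is an illustration only, not the method of the paper: a standard MLP regressor from scikit-learn stands in for the Random Neural Network, and the feature names, parameter values, and MOS labels are all hypothetical.

```python
# Illustrative sketch: a generic MLP regressor stands in for the
# Random Neural Network; all data below is hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row is one (encoding, FEC, network) configuration:
# [packet loss rate, mean loss burst size, codec bit rate (kb/s), FEC on/off]
X = np.array([
    [0.00, 1.0, 64.0, 0],
    [0.02, 1.5, 64.0, 0],
    [0.02, 1.5, 64.0, 1],
    [0.05, 2.0, 32.0, 0],
    [0.05, 2.0, 32.0, 1],
    [0.10, 2.5, 16.0, 0],
])

# Hypothetical MOS scores (1-5) that a listener panel would assign
# to each configuration in a subjective test.
y = np.array([4.4, 3.8, 4.1, 3.0, 3.5, 2.2])

# Train the regressor to reproduce the panel's scores.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# At run time, quality is estimated from parameters measured on the live
# stream, with no access to the original signal.
live_config = np.array([[0.03, 1.8, 32.0, 1]])
print(f"Estimated MOS: {model.predict(live_config)[0]:.2f}")
```

Once trained, the quality estimate is a single cheap function evaluation over the measured parameters, which is what allows the assessment to run in real time without the original signal.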