I would like to know whether high-end audio frequencies, e.g. frequencies above 14 kHz, are transmitted in television broadcast. What is the cut-off audio frequency of television broadcast transmission?
I am preparing a research report characterizing the audio frequency response of various devices, e.g. iPhones, Samsung Galaxy phones, and various televisions. While doing this, I became curious about the spectrum output of a television. Given a piece of audio for broadcast transmission, how does its spectrum get altered? At what frequency is there a cutoff, and in particular, can the cutoff be below ~20 kHz, the upper limit of the human audible range?
Broadly, my interest is in knowing what happens to audio handed to a broadcasting channel by the time it arrives as input to a television.
Thanks.
Answer
In digital television, I think a 48 kHz sampling frequency is mostly used, so it comes down to whether the audio encoder allocates any bits to the highest, near-ultrasonic frequencies:
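As a quick sanity check on the numbers: a 48 kHz sampling rate puts a hard Nyquist ceiling of 24 kHz on the audio, so nothing above 24 kHz can survive regardless of how the encoder allocates bits. A one-line sketch of that arithmetic:

```python
fs = 48000          # assumed DVB audio sampling rate, from the answer above
nyquist = fs / 2    # highest frequency representable at this rate
print(nyquist)      # 24000.0 Hz: the absolute upper bound before encoding
```

Everything below that bound is then at the mercy of the encoder's bit allocation, which the experiment below probes.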

Figure 1. Before encoding: Left) almost full scale white noise, Center) almost full scale pink noise, Right) full scale 20-24 kHz band pass white noise.

Figure 2. After encoding in MPEG 1.0 layer II, 224 kbit/s, 48000 Hz stereo (known to be used in DVB-T) and decoding, using mp2enc for encoding and mpg123 for decoding.
With white noise, only some bursts of frequency survive in the 16-20 kHz range, and above that nothing. Starting with pink noise, everything above 15 kHz gets stripped away, because the bits are allocated to the higher-intensity lower frequencies instead. Starting with content only above 20 kHz, some ultrasonic frequencies survive, as no bits need to be allocated to the lower frequencies.
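For anyone who wants to reproduce the experiment, test signals like the three described above could be generated along these lines. This is a sketch, not the exact signals used for the figures: function names, the one-second duration, and the FFT-masking approach to pink and band-passed noise are my own choices.

```python
import numpy as np

FS = 48000          # DVB audio sampling rate assumed in the answer
N = FS              # one second of audio
rng = np.random.default_rng(0)

def white_noise(n=N):
    """Almost-full-scale white noise (flat spectrum)."""
    x = rng.standard_normal(n)
    return 0.9 * x / np.max(np.abs(x))

def pink_noise(n=N):
    """Pink noise: shape white noise by 1/sqrt(f), giving a 1/f power spectrum."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / FS)
    f[0] = f[1]                      # avoid division by zero at DC
    spec /= np.sqrt(f)
    x = np.fft.irfft(spec, n)
    return 0.9 * x / np.max(np.abs(x))

def bandpass_noise(lo=20000, hi=24000, n=N):
    """Full-scale white noise restricted to [lo, hi] Hz by zeroing FFT bins."""
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n, d=1.0 / FS)
    spec[(f < lo) | (f > hi)] = 0.0
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))

def band_energy_fraction(x, lo, hi):
    """Fraction of the signal's energy inside [lo, hi] Hz."""
    power = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), d=1.0 / FS)
    mask = (f >= lo) & (f <= hi)
    return power[mask].sum() / power.sum()
```

Writing these out as WAV files and running them through any broadcast-style encoder, then comparing `band_energy_fraction` above 15 kHz before and after, should show the same bit-allocation effect as in the figures.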
Other encoding methods besides MPEG 1.0 layer II are also in use in broadcast television.