You could very well be missing some important specifications in the `AudioStreamBasicDescription` structure: I don't know about ffmpeg, but specifying zero bytes per frame and zero bytes per packet won't work ;) Here is how I would fill the structure, given the **sample rate**, the **audio format**, the **number of channels**, and the **bits per sample**:

```c
iAqc.mDataFormat.mSampleRate       = iSampleRate;
iAqc.mDataFormat.mFormatID         = kAudioFormatLinearPCM;
iAqc.mDataFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iAqc.mDataFormat.mBytesPerPacket   = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mFramesPerPacket  = 1;
iAqc.mDataFormat.mBytesPerFrame    = (iBitsPerSample >> 3) * iNumChannels;
iAqc.mDataFormat.mChannelsPerFrame = iNumChannels;
iAqc.mDataFormat.mBitsPerChannel   = iBitsPerSample;
```

I assume here that you are writing PCM samples to the audio device. As long as you know the audio format you are working with, there should be no problem adapting this: the important thing is to remember what all this stuff means. Here I am working with one sample frame per packet, so the number of bytes per packet coincides with the number of bytes per sample frame.

Most of the problems arise because words such as "sample" and "sample frame" are often used in the wrong contexts. A sample frame can be thought of as the atomic unit of audio data that embraces all the available channels; a sample is a single sub-unit of data composing the sample frame.

For example, if you have an audio stream of 2 channels with a resolution of 16 bits per sample, a sample will be 2 bytes big (16 bps / 8, or 16 >> 3), while the sample frame also takes the number of channels into account, so it will be 4 bytes big (2 bytes × 2 channels).

**IMPORTANT** The theory behind this doesn't apply only to the iPhone, but to audio coding in general! It just happens that Audio Queues ask you for well-defined specifications about your audio stream, and that's good, but you could be asked for bytes instead, so expressing audio data sizes as sample frames is always a good idea: you can always convert your data sizes and be sure about them.
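Since the whole answer hinges on converting between sample frames and bytes, here is a minimal sketch of those conversions in plain C. The helper names (`bytes_per_sample`, `frames_to_bytes`, and so on) are hypothetical, invented purely for illustration, and are not part of any Core Audio API:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helpers, NOT part of Core Audio: they only illustrate
   the sample / sample-frame arithmetic described above. */

/* One sample: a single channel's value, e.g. 16 bits -> 2 bytes. */
static size_t bytes_per_sample(uint32_t bitsPerSample) {
    return bitsPerSample >> 3;
}

/* One sample frame: one sample for each channel,
   e.g. 2 bytes x 2 channels -> 4 bytes. */
static size_t bytes_per_frame(uint32_t bitsPerSample, uint32_t numChannels) {
    return bytes_per_sample(bitsPerSample) * numChannels;
}

/* A buffer size expressed in frames can always be converted to bytes... */
static size_t frames_to_bytes(size_t frames, uint32_t bitsPerSample,
                              uint32_t numChannels) {
    return frames * bytes_per_frame(bitsPerSample, numChannels);
}

/* ...and back, which is why counting in frames is the safe choice. */
static size_t bytes_to_frames(size_t bytes, uint32_t bitsPerSample,
                              uint32_t numChannels) {
    return bytes / bytes_per_frame(bitsPerSample, numChannels);
}
```

For instance, with 16 bits per sample and 2 channels, `frames_to_bytes(512, 16, 2)` gives 2048 bytes, which is the same arithmetic as the `mBytesPerFrame` assignment above.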
 
