<p>I don't think you need to enable input on the I/O unit. I would also comment out the format and sample rate configuration that you're doing on the I/O unit until you get your callback running, because a mismatched or unsupported format can prevent the audio units from being linked together.</p>

<p>To add the callback, try this method:</p>

<pre><code>AudioUnitAddRenderNotify(
    iOUnit,
    &amp;recordingCallback,
    self
);
</code></pre>

<p>Apparently the other methods will replace the node connection, but this method will not -- so your audio units can stay connected even though you've added a callback.</p>

<p>Once your callback is running, if you find that there's no data in the buffers (ioData), wrap this code around your callback code:</p>

<pre><code>if (*ioActionFlags == kAudioUnitRenderAction_PostRender) {
    // your code
}
</code></pre>

<p>This is needed because a callback added in this way runs both before and after the audio unit renders its audio, but you just want to run your code after it renders.</p>

<p>Once the callback is running, the next step is to figure out what audio format it's receiving and handle it appropriately. Try adding this to your callback:</p>

<pre><code>SInt16 *dataLeftChannel = (SInt16 *)ioData-&gt;mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber &lt; inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %d", frameNumber, dataLeftChannel[frameNumber]);
}
</code></pre>

<p>This will slow your app so much that it will probably prevent any audio from actually playing, but you should be able to run it long enough to see what the samples look like. If the callback is receiving 16-bit audio, the samples should be positive or negative integers between -32000 and 32000. If the samples alternate between a normal-looking number and a much smaller number, try this code in your callback instead:</p>

<pre><code>SInt32 *dataLeftChannel = (SInt32 *)ioData-&gt;mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber &lt; inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %ld", frameNumber, dataLeftChannel[frameNumber]);
}
</code></pre>

<p>This should show you the complete 8.24 samples.</p>

<p>If you can save the data in the format the callback is receiving, then you should have what you need. If you need to save it in a different format, you should be able to convert the format in the Remote I/O audio unit ... but I <a href="https://devforums.apple.com/message/516994#516994" rel="noreferrer">haven't been able to figure out how to do that</a> when it's connected to a Multichannel Mixer unit. As an alternative, you can convert the data using the <a href="http://developer.apple.com/library/ios/#documentation/MusicAudio/Reference/AudioConverterServicesReference/Reference/reference.html" rel="noreferrer">Audio Converter Services</a>.
First, define the input and output formats:</p>

<pre><code>AudioStreamBasicDescription monoCanonicalFormat;
size_t bytesPerSample = sizeof (AudioUnitSampleType);

monoCanonicalFormat.mFormatID         = kAudioFormatLinearPCM;
monoCanonicalFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
monoCanonicalFormat.mBytesPerPacket   = bytesPerSample;
monoCanonicalFormat.mFramesPerPacket  = 1;
monoCanonicalFormat.mBytesPerFrame    = bytesPerSample;
monoCanonicalFormat.mChannelsPerFrame = 1;
monoCanonicalFormat.mBitsPerChannel   = 8 * bytesPerSample;
monoCanonicalFormat.mSampleRate       = graphSampleRate;

AudioStreamBasicDescription mono16Format;
bytesPerSample = sizeof (SInt16);

mono16Format.mFormatID         = kAudioFormatLinearPCM;
mono16Format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono16Format.mChannelsPerFrame = 1;
mono16Format.mSampleRate       = graphSampleRate;
mono16Format.mBitsPerChannel   = 16;
mono16Format.mFramesPerPacket  = 1;
mono16Format.mBytesPerPacket   = 2;
mono16Format.mBytesPerFrame    = 2;
</code></pre>

<p>Then define a converter somewhere outside your callback, and create a temporary buffer for handling the data during conversion:</p>

<pre><code>// instance variable and property declaration, in your interface:
AudioConverterRef formatConverterCanonicalTo16;
@property AudioConverterRef formatConverterCanonicalTo16;

// in your implementation:
@synthesize formatConverterCanonicalTo16;

AudioConverterNew(
    &amp;monoCanonicalFormat,
    &amp;mono16Format,
    &amp;formatConverterCanonicalTo16
);

// likewise for the temporary buffer:
SInt16 *data16;
@property (readwrite) SInt16 *data16;
@synthesize data16;

data16 = malloc(sizeof(SInt16) * 4096);
</code></pre>

<p>Then add this to your callback, before you save your data:</p>

<pre><code>UInt32 dataSizeCanonical = ioData-&gt;mBuffers[0].mDataByteSize;
SInt32 *dataCanonical = (SInt32 *)ioData-&gt;mBuffers[0].mData;
UInt32 dataSize16 = dataSizeCanonical;

AudioConverterConvertBuffer(
    effectState-&gt;formatConverterCanonicalTo16,
    dataSizeCanonical,
    dataCanonical,
    &amp;dataSize16,
    effectState-&gt;data16
);
</code></pre>

<p>Then you can save data16, which is in 16-bit format and might be what you want saved in your file. It will be more compatible and half as large as the canonical data.</p>

<p>When you're done, you can clean up a couple things:</p>

<pre><code>AudioConverterDispose(formatConverterCanonicalTo16);
free(data16);
</code></pre>