Playing audio on iOS over a socket connection

Armand

Hoping you can help me with this one. I've seen a lot of questions related to this, but none of them really helped me figure out what I'm doing wrong here.

So, on Android I have an AudioRecord which records audio and sends it as a byte array over a socket connection to the client. That part is dead simple on Android and works perfectly.

When I started working with iOS I found out there is no easy way to do this, so after two days of research and plugging and playing, this is what I've got. It still doesn't play any audio: it makes a sound on startup, but none of the audio coming through the socket is played. I confirmed the socket is receiving data by logging each element of the buffer array.

Here is all the code I'm using; a lot of it is reused from a bunch of sites, I forget all the links. (Using AudioUnits, by the way.)

First, the audio processor: the playback callback

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    /**
     This is a reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (__bridge AudioProcessor*) inRefCon;

    // iterate over the output buffers and copy our stored audio into them
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer buffer = ioData->mBuffers[i];

        // copy no more bytes than either buffer can hold
        UInt32 size = MIN(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size;
    }
    return noErr;
}
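A side note on this callback: it copies from the same shared audioBuffer on every render cycle, whether or not fresh data has arrived from the socket, so a block can be replayed, or overwritten while it is being read. The usual way to feed a RemoteIO output is a ring buffer sitting between the socket thread and the render thread. A minimal sketch, assuming 16-bit mono samples (the RingBuffer names are my own illustration, not part of the original code):

#define RING_CAPACITY (44100 * 2) // two seconds of 16-bit mono at 44.1 kHz

typedef struct {
    int16_t samples[RING_CAPACITY];
    volatile UInt32 readIndex;  // only advanced by the render callback
    volatile UInt32 writeIndex; // only advanced by the socket thread
} RingBuffer;

// Drain up to maxSamples into dst; returns how many were actually read.
static UInt32 RingBufferRead(RingBuffer *rb, int16_t *dst, UInt32 maxSamples) {
    UInt32 n = 0;
    while (n < maxSamples && rb->readIndex != rb->writeIndex) {
        dst[n++] = rb->samples[rb->readIndex];
        rb->readIndex = (rb->readIndex + 1) % RING_CAPACITY;
    }
    return n; // on underrun the caller should zero-fill dst[n..]
}

// Append count samples, overwriting the oldest data when the ring is full.
static void RingBufferWrite(RingBuffer *rb, const int16_t *src, UInt32 count) {
    for (UInt32 i = 0; i < count; i++) {
        rb->samples[rb->writeIndex] = src[i];
        rb->writeIndex = (rb->writeIndex + 1) % RING_CAPACITY;
    }
}

The render callback would then drain the ring into ioData and zero-fill whatever is missing, instead of memcpy'ing the shared buffer.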

Audio processor initialization

-(void)initializeAudio
{
    OSStatus status;

    // We define the audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output; // we want to output
    desc.componentSubType = kAudioUnitSubType_RemoteIO; // we want input and output
    desc.componentFlags = 0; // must be zero
    desc.componentFlagsMask = 0; // must be zero
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; // select provider

    // find the AU component by description
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // create audio unit by component
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    [self hasError:status:__FILE__:__LINE__];

    UInt32 flag = 1;

    // enable IO for playback on the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO, // use io
                                  kAudioUnitScope_Output, // scope to output
                                  kOutputBus, // select output bus (0)
                                  &flag, // set flag
                                  sizeof(flag));
    [self hasError:status:__FILE__:__LINE__];

    /*
     We need to specify the format we want to work with.
     We use linear PCM because it's uncompressed and we work on raw data.

     We want 16-bit mono samples, 2 bytes per packet/frame, at 44.1 kHz.
     */
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate         = SAMPLE_RATE;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // apply the format to the output scope of the input (mic) bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));

    [self hasError:status:__FILE__:__LINE__];



    /**
     We need to define a callback structure which holds
     a pointer to the playbackCallback and a reference to
     the audio processor object.
     */
    AURenderCallbackStruct callbackStruct;

    /*
     The playback callback feeds the output bus with
     whatever audio has arrived over the socket.
     */
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = (__bridge void *)(self);

    // set playbackCallback as callback on our renderer for the output bus
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));

    [self hasError:status:__FILE__:__LINE__];

    // reset flag to 0
    flag = 0;

    /*
     Tell the audio unit not to allocate its own render buffer,
     so that we can hand it one we write into directly.
     */
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    /*
     Set the number of channels to mono and allocate our block:
     1024 bytes, i.e. 512 16-bit samples.
     */
    audioBuffer.mNumberChannels = 1;
    audioBuffer.mDataByteSize = 512 * 2;
    audioBuffer.mData = malloc( 512 * 2 );

    // Initialize the Audio Unit and cross fingers =)
    status = AudioUnitInitialize(audioUnit);
    [self hasError:status:__FILE__:__LINE__];

    NSLog(@"Started");

}
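A side observation, and this is an assumption on my part rather than something from the sites I copied from: the stream format above is set on kAudioUnitScope_Output of kInputBus, which describes what comes out of the microphone element. The render callback, however, feeds the output element, and the format the unit expects from a render callback is the one on the input scope of that bus. Telling the unit about our 16-bit PCM there, right after the existing StreamFormat call, would look like this:

// Sketch (assumption): describe the format our render callback supplies
// to the output element. kOutputBus is the output bus (0) defined above.
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input, // data we feed in...
                              kOutputBus,            // ...to the output bus
                              &audioFormat,
                              sizeof(audioFormat));
[self hasError:status:__FILE__:__LINE__];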

Start playing

-(void)start;
{
    // start the audio unit. You should hear something, hopefully :)
    OSStatus status = AudioOutputUnitStart(audioUnit);
    [self hasError:status:__FILE__:__LINE__];
}

Adding data to the buffer

-(void)processBuffer: (AudioBufferList*) audioBufferList
{
    AudioBuffer sourceBuffer = audioBufferList->mBuffers[0];

    // check whether the incoming data's byte size has changed
    if (audioBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        // clear the old buffer
        free(audioBuffer.mData);
        // assign the new byte size and allocate mData accordingly
        audioBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        audioBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy the incoming audio data into our audio buffer
    memcpy(audioBuffer.mData, sourceBuffer.mData, sourceBuffer.mDataByteSize);
}

The stream connection callback (socket)

-(void)stream:(NSStream *)aStream handleEvent:(NSStreamEvent)eventCode
{
    if(eventCode == NSStreamEventHasBytesAvailable)
    {
        if(aStream == inputStream) {
            uint8_t buffer[1024];
            UInt32 len;
            while ([inputStream hasBytesAvailable]) {
                len = (UInt32)[inputStream read:buffer maxLength:sizeof(buffer)];
                if(len > 0)
                {
                    AudioBuffer abuffer;

                    abuffer.mDataByteSize = len; // byte size of the (still encoded) data
                    abuffer.mNumberChannels = 1; // one channel
                    abuffer.mData = buffer;

                    int16_t audioBuffer[len];

                    for (int i = 0; i < len; i++)
                    {
                        audioBuffer[i] = MuLaw_Decode(buffer[i]);
                    }

                    AudioBufferList bufferList;
                    bufferList.mNumberBuffers = 1;
                    bufferList.mBuffers[0] = abuffer; // note: the raw bytes, not the decoded samples

                    NSLog(@"read %u bytes from the socket", (unsigned int)len);

                    [audioProcessor processBuffer:&bufferList];
                }
            }
        }
    }
}

MuLaw_Decode

#define MULAW_BIAS 33
int16_t MuLaw_Decode(uint8_t number)
{
    uint8_t sign = 0, position = 0;
    int16_t decoded = 0;
    number = ~number;        // mu-law bytes are stored inverted
    if (number & 0x80)
    {
        number &= ~(1 << 7); // strip the sign bit
        sign = 1;
    }
    position= ((number & 0xF0) >> 4) + 5;
    decoded = ((1<<position) | ((number&0x0F) << (position - 4)) |(1<<(position-5))) - MULAW_BIAS;
    return (sign == 0) ? decoded : (-(decoded));
}
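For anyone who wants to sanity-check the decoder in isolation, this is the standard G.711-style encoder that pairs with it (hypothetical here, since in my setup the encoding happens on the Android side; it uses the same bias of 33):

#define MULAW_MAX 0x1FFF

// Hypothetical counterpart to MuLaw_Decode, for round-trip testing only.
uint8_t MuLaw_Encode(int16_t sample)
{
    uint16_t mask = 0x1000;
    uint8_t sign = 0, position = 12, lsb = 0;
    int32_t number = sample; // widen so negation and the bias cannot overflow
    if (number < 0)
    {
        number = -number;
        sign = 0x80;
    }
    number += MULAW_BIAS;
    if (number > MULAW_MAX)
        number = MULAW_MAX;
    // find the highest set bit; the bias guarantees it is at position 5 or above
    for (; (number & mask) != mask && position >= 5; mask >>= 1, position--)
        ;
    lsb = (number >> (position - 4)) & 0x0F;
    return ~(sign | ((position - 5) << 4) | lsb);
}

Encoding a sample and decoding it back should give a value close to the original (mu-law is lossy), which makes for a quick test of both functions.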

And the code that opens the connection and initializes the audio processor

CFReadStreamRef readStream;
CFWriteStreamRef writeStream;



CFStreamCreatePairWithSocketToHost(NULL, (__bridge CFStringRef)@"10.0.0.14", 6000, &readStream, &writeStream);

inputStream = (__bridge_transfer NSInputStream *)readStream;
outputStream = (__bridge_transfer NSOutputStream *)writeStream;

[inputStream setDelegate:self];
[outputStream setDelegate:self];

[inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
[inputStream open];
[outputStream open];


audioProcessor = [[AudioProcessor alloc] init];
[audioProcessor start];
[audioProcessor setGain:1];

I believe the problem in my code is with the socket connection callback: I'm not doing the right thing with the data.
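To spell out why I suspect that: in the stream callback, MuLaw_Decode fills the local audioBuffer array with 16-bit samples, but the AudioBufferList handed to processBuffer: still points at the raw 8-bit mu-law bytes, with mDataByteSize set to the encoded byte count. A sketch of what passing the decoded samples along could look like (a guess, not tested):

// Sketch: hand the decoded 16-bit samples, not the raw mu-law bytes,
// to the audio processor. One mu-law byte decodes to one int16_t
// sample, so the byte size doubles.
int16_t decoded[1024];
for (UInt32 i = 0; i < len; i++) {
    decoded[i] = MuLaw_Decode(buffer[i]);
}

AudioBuffer abuffer;
abuffer.mNumberChannels = 1;
abuffer.mDataByteSize   = len * sizeof(int16_t);
abuffer.mData           = decoded;

AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0]    = abuffer;

// processBuffer: copies the data immediately, so stack storage is fine here
[audioProcessor processBuffer:&bufferList];

And if the Android side records mu-law at the telephony-standard 8 kHz, the SAMPLE_RATE in the stream format would need to match that as well, but that depends on how the recorder is configured.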

Armand

I ended up solving this; check out the answer here.

I was going to put the code here, but it would be a lot of copy-pasting.
