2011-05-09
15

Is it possible to redirect audio output to the phone speaker while still taking microphone input from the headset?

If I redirect the audio route to the phone speaker instead of the headset, the microphone gets redirected as well. That makes sense, but I can't seem to redirect only the output while keeping the microphone input on the headset. Any ideas?

Here is the code I'm using to redirect audio to the speaker:

UInt32 doChangeDefaultRoute = true;   
OSStatus propertySetError = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute); 
NSAssert(propertySetError == 0, @"Failed to set audio session property: OverrideCategoryDefaultToSpeaker"); 
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute,sizeof (audioRouteOverride),&audioRouteOverride); 
+1

You may already know this, but when you enable the speaker you also enable the microphone on the phone. On an iPod, however, if you enable the speaker you still get microphone input from the headset. Since the iPod has no built-in microphone, that is intentional. I briefly got an iOS 4.3 SDK app to take microphone input from the headset while outputting to the speaker by reinitializing the AUGraph after the route change, but it only worked intermittently and now doesn't happen at all (iOS 4.3+, Xcode 4+) – zeAttle 2012-01-10 14:34:29

Answers

4

It doesn't look like it's possible, I'm afraid.

Audio Session Programming Guide - kAudioSessionProperty_OverrideAudioRoute

If a headset is plugged in at the time you set this property's value to kAudioSessionOverrideAudioRoute_Speaker, the system changes the audio routing for input as well as for output: input comes from the built-in microphone; output goes to the built-in speaker.

See also this question.

+0

It might be worth trying the same steps Tommy took in [this question](http://stackoverflow.com/questions/4002133/forcing-iphone-microphone-as-audio-input) to find the available `AVCaptureDevice`s – 2012-01-10 16:09:09
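To illustrate the comment above, here is a minimal sketch of enumerating audio capture devices with AVFoundation (`devicesWithMediaType:` is the era-appropriate API; the function name `logAudioCaptureDevices` is just for this example):

```objc
#import <AVFoundation/AVFoundation.h>

// List every audio input device the system currently exposes.
// On an iPhone this typically returns a single built-in microphone device,
// which is why per-device selection alone may not split mic and speaker routing.
void logAudioCaptureDevices(void) {
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeAudio];
    for (AVCaptureDevice *device in devices) {
        NSLog(@"Audio capture device: %@ (uniqueID: %@)",
              device.localizedName, device.uniqueID);
    }
}
```

If the headset microphone shows up as its own device here, capturing from it explicitly would be worth a try.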

6

This is possible, but it's picky about how you set it up.

[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil]; 
UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 

It is very important to use AVAudioSessionCategoryPlayAndRecord, or the route won't change to the speaker. Once the override route is set on the audio session, you can use an AVAudioPlayer instance and send some output to the speaker.
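As a usage sketch of the snippet above (the bundled file name `beep.caf` is hypothetical; error handling is kept minimal):

```objc
// With the category and route override in place, playback goes to the
// built-in speaker even while headphones are plugged in.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"beep" withExtension:@"caf"]; // hypothetical sound file
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (player != nil) {
    [player prepareToPlay];
    [player play]; // output now routed to the speaker
}
```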

Hopefully this works for others as it did for me. The documentation on this is scattered, but the Skype app proves it's possible. Hang in there, my friends! :)

Location of some Apple documentation: http://developer.apple.com/library/ios/#documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html

Do an in-page search for kAudioSessionProperty_OverrideAudioRoute.

2

What you can do is force the audio output to the speakers in any case:

UI Hacker - iOS: Force audio output to speakers while headphones are plugged in

@interface AudioRouter : NSObject 

+ (void) initAudioSessionRouting; 
+ (void) switchToDefaultHardware; 
+ (void) forceOutputToBuiltInSpeakers; 

@end 

and

#import "AudioRouter.h" 
#import <AudioToolbox/AudioToolbox.h> 
#import <AVFoundation/AVFoundation.h> 

@implementation AudioRouter 

#define IS_DEBUGGING NO 
#define IS_DEBUGGING_EXTRA_INFO NO 

+ (void) initAudioSessionRouting { 

    // Called once to route all audio through speakers, even if something's plugged into the headphone jack 
    static BOOL audioSessionSetup = NO; 
    if (audioSessionSetup == NO) { 

     // set category to accept properties assigned below 
     NSError *sessionError = nil; 
     [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker error: &sessionError]; 

     // Doubly force audio to come out of speaker 
     UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
     AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 

     // fix issue with audio interrupting video recording - allow audio to mix on top of other media 
     UInt32 doSetProperty = 1; 
     AudioSessionSetProperty (kAudioSessionProperty_OverrideCategoryMixWithOthers, sizeof(doSetProperty), &doSetProperty); 

     // set active 
     [[AVAudioSession sharedInstance] setDelegate:self]; 
     [[AVAudioSession sharedInstance] setActive: YES error: nil]; 

     // add listener for audio input changes 
     AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange, onAudioRouteChange, nil); 
     AudioSessionAddPropertyListener (kAudioSessionProperty_AudioInputAvailable, onAudioRouteChange, nil); 

    } 

    // Force audio to come out of speaker 
    [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil]; 


    // set flag 
    audioSessionSetup = YES; 
} 

+ (void) switchToDefaultHardware { 
    // Remove forcing to built-in speaker 
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_None; 
    AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 
} 

+ (void) forceOutputToBuiltInSpeakers { 
    // Re-force audio to come out of speaker 
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker; 
    AudioSessionSetProperty (kAudioSessionProperty_OverrideAudioRoute, sizeof(audioRouteOverride), &audioRouteOverride); 


} 

void onAudioRouteChange (void* clientData, AudioSessionPropertyID inID, UInt32 dataSize, const void* inData) { 

    if(IS_DEBUGGING == YES) { 
     NSLog(@"==== Audio Hardware Status ===="); 
     NSLog(@"Current Input: %@", [AudioRouter getAudioSessionInput]); 
     NSLog(@"Current Output: %@", [AudioRouter getAudioSessionOutput]); 
     NSLog(@"Current hardware route: %@", [AudioRouter getAudioSessionRoute]); 
     NSLog(@"=============================="); 
    } 

    if(IS_DEBUGGING_EXTRA_INFO == YES) { 
     NSLog(@"==== Audio Hardware Status (EXTENDED) ===="); 
     CFDictionaryRef dict = (CFDictionaryRef)inData; 
     CFNumberRef reason = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_Reason); 
     CFDictionaryRef oldRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_PreviousRouteDescription); 
     CFDictionaryRef newRoute = CFDictionaryGetValue(dict, kAudioSession_AudioRouteChangeKey_CurrentRouteDescription); 
     NSLog(@"Audio route change reason: %@", reason); 
     NSLog(@"Audio old route: %@", oldRoute); 
     NSLog(@"Audio new route: %@", newRoute); 
     NSLog(@"========================================="); 
    } 



} 

+ (NSString*) getAudioSessionInput { 
    UInt32 routeSize; 
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize); 
    CFDictionaryRef desc; // this is the dictionary to contain descriptions 

    // make the call to get the audio description and populate the desc dictionary 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc); 

    // the dictionary contains 2 keys, for input and output. Get the input array 
    CFArrayRef inputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Inputs); 

    // the input array contains 1 element - a dictionary 
    CFDictionaryRef diction = CFArrayGetValueAtIndex(inputs, 0); 

    // get the input description from the dictionary 
    CFStringRef input = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type); 
    return [NSString stringWithFormat:@"%@", input]; 
} 

+ (NSString*) getAudioSessionOutput { 
    UInt32 routeSize; 
    AudioSessionGetPropertySize(kAudioSessionProperty_AudioRouteDescription, &routeSize); 
    CFDictionaryRef desc; // this is the dictionary to contain descriptions 

    // make the call to get the audio description and populate the desc dictionary 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRouteDescription, &routeSize, &desc); 

    // the dictionary contains 2 keys, for input and output. Get output array 
    CFArrayRef outputs = CFDictionaryGetValue(desc, kAudioSession_AudioRouteKey_Outputs); 

    // the output array contains 1 element - a dictionary 
    CFDictionaryRef diction = CFArrayGetValueAtIndex(outputs, 0); 

    // get the output description from the dictionary 
    CFStringRef output = CFDictionaryGetValue(diction, kAudioSession_AudioRouteKey_Type); 
    return [NSString stringWithFormat:@"%@", output]; 
} 

+ (NSString*) getAudioSessionRoute { 
    /* 
    returns the current session route: 
    * ReceiverAndMicrophone 
    * HeadsetInOut 
    * Headset 
    * HeadphonesAndMicrophone 
    * Headphone 
    * SpeakerAndMicrophone 
    * Speaker 
    * HeadsetBT 
    * LineInOut 
    * Lineout 
    * Default 
    */ 

    UInt32 rSize = sizeof (CFStringRef); 
    CFStringRef route; 
    AudioSessionGetProperty (kAudioSessionProperty_AudioRoute, &rSize, &route); 

    if (route == NULL) { 
     NSLog(@"Silent switch is currently on"); 
     return @"None"; 
    } 
    return [NSString stringWithFormat:@"%@", route]; 
} 

@end 
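A brief usage sketch of the AudioRouter class above (the call sites shown are assumptions about where a typical app would invoke it):

```objc
// Early in app startup, e.g. in application:didFinishLaunchingWithOptions:
[AudioRouter initAudioSessionRouting];      // routes all audio to the built-in speaker

// Later, to return to normal routing (e.g. when the user wants headphones):
[AudioRouter switchToDefaultHardware];

// ...and to force the built-in speaker again:
[AudioRouter forceOutputToBuiltInSpeakers];
```

Note that, per the accepted answer, forcing the speaker this way also moves microphone input to the built-in mic while a headset is plugged in.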