
AAOS Series (7) --- AudioRecord Recording Logic Analysis (Part 1)

Explaining the AAOS architecture in one article — to the point, and holding nothing back.
📌 This post analyzes the initialization of AudioRecord.

1. Scenario:

In AAOS Framework development, the recording module is an important component that almost every project touches. Whether it is voice control, in-car intercom (companion mode), or integration of a speech-recognition engine such as iFLYTEK, all of these depend heavily on the system Framework layer providing stable, usable recording data to the app layer.

When implementing recording, several key parameters must be set correctly: the sample rate, the channel count, and the audio format. The sample rate is especially critical: if it is set incorrectly, the recorded audio data may not decode or play back properly, causing the speech engine to fail at recognition and seriously hurting the user experience.

Therefore, these parameter configurations need to be agreed on and unified during platform development to ensure the stability and compatibility of the entire voice pipeline.

The basic recording code on the app side is as follows:

// audio source
private int audioSource = MediaRecorder.AudioSource.MIC;
// sample rate; 44100 is the current standard, but some devices still support 22050, 16000, 11025
private static int sampleRateInHz = 16000;
// recording channel config: CHANNEL_IN_STEREO is stereo, CHANNEL_CONFIGURATION_MONO is mono
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
// audio data format: PCM 16 bits per sample (guaranteed to be supported); PCM 8 bits per sample (not necessarily supported)
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private int bufferSizeInBytes = 0;
private AudioRecord audioRecord;

private void creatAudioRecord() {
    // true: recording path used by built-in apps; false: recording path used by third-party apps
    boolean isBuild = true;
    if (isBuild) {
        // MIC1+MIC2+REF1+REF2
        bufferSizeInBytes = 5120;
        final AudioFormat audioFormat1 = new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
                .setSampleRate(sampleRateInHz)
                .setChannelIndexMask(0xf) // with this set, 4-channel data is captured
                .build();
        audioRecord = new AudioRecord.Builder()
                .setAudioFormat(audioFormat1)
                .build();
    } else {
        //channelConfig = AudioFormat.CHANNEL_IN_STEREO | AudioFormat.CHANNEL_IN_FRONT_BACK;
        // get the buffer size in bytes
        bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
        Log.d(TAG, "creatAudioRecord: bufferSizeInBytes = " + bufferSizeInBytes);
        audioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes);
    }
}
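For completeness, once the AudioRecord has been created, recording follows the usual start/read/stop cycle. The sketch below is a minimal read loop built on the fields above; the isRecording flag and what you do with the buffer are illustrative assumptions, not taken from the original code.

private volatile boolean isRecording = false;   // hypothetical stop flag

private void startReading() {
    byte[] pcmBuffer = new byte[bufferSizeInBytes];
    audioRecord.startRecording();               // switches to RECORDSTATE_RECORDING
    isRecording = true;
    while (isRecording) {
        // read() blocks until PCM data is available and returns the number of bytes read
        int read = audioRecord.read(pcmBuffer, 0, pcmBuffer.length);
        if (read > 0) {
            // hand pcmBuffer[0..read) to the consumer, e.g. a speech-recognition engine
        }
    }
    audioRecord.stop();
    audioRecord.release();                      // free the native resources
}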

Creating the AudioRecord above calls into the constructor of AudioRecord.java. Regular apps such as HiCar and WeChat go through the standard API to invoke the recording interface:

//channelConfig = AudioFormat.CHANNEL_IN_STEREO | AudioFormat.CHANNEL_IN_FRONT_BACK;
// get the buffer size in bytes
bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
Log.d(TAG, "creatAudioRecord: bufferSizeInBytes = " + bufferSizeInBytes);
// call the AudioRecord constructor
audioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, bufferSizeInBytes);
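Before calling startRecording(), it is worth checking that construction actually succeeded, because (as the constructor analysis below shows) a native_setup failure only leaves the object in STATE_UNINITIALIZED instead of throwing. A minimal sketch of such a check:

int minBuf = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
if (minBuf == AudioRecord.ERROR_BAD_VALUE || minBuf == AudioRecord.ERROR) {
    Log.e(TAG, "unsupported recording parameters");   // e.g. the sample rate is not supported
    return;
}
audioRecord = new AudioRecord(audioSource, sampleRateInHz, channelConfig, audioFormat, minBuf);
if (audioRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    Log.e(TAG, "AudioRecord init failed, native_setup returned an error");
    audioRecord.release();
    return;
}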

Next, let's walk through the relevant methods of the AudioRecord.java class:

1. The AudioRecord constructor:

public class AudioRecord implements AudioRouting, MicrophoneDirection,
        AudioRecordingMonitor, AudioRecordingMonitorClient
{
    // standard app-facing constructor
    public AudioRecord(int audioSource, int sampleRateInHz, int channelConfig, int audioFormat,
            int bufferSizeInBytes) throws IllegalArgumentException {
        this((new AudioAttributes.Builder())
                    .setInternalCapturePreset(audioSource)
                    .build(),
                (new AudioFormat.Builder())
                    // 2. here channelConfig is converted into a channel mask
                    .setChannelMask(getChannelMaskFromLegacyConfig(channelConfig,
                            true/*allow legacy configurations*/))
                    .setEncoding(audioFormat)
                    .setSampleRate(sampleRateInHz)
                    .build(),
                bufferSizeInBytes,
                AudioManager.AUDIO_SESSION_ID_GENERATE);
    }
}
2. Call getChannelMaskFromLegacyConfig() to convert channelConfig into a channel mask:

// when this method is called, the argument we pass in is CHANNEL_IN_STEREO = 12
private static int getChannelMaskFromLegacyConfig(int inChannelConfig,
        boolean allowLegacyConfig) {
    int mask;
    switch (inChannelConfig) {
    case AudioFormat.CHANNEL_IN_DEFAULT: // AudioFormat.CHANNEL_CONFIGURATION_DEFAULT
    case AudioFormat.CHANNEL_IN_MONO:
    case AudioFormat.CHANNEL_CONFIGURATION_MONO:
        mask = AudioFormat.CHANNEL_IN_MONO;
        break;
    // for a stereo config the mask stays CHANNEL_IN_STEREO, unchanged
    case AudioFormat.CHANNEL_IN_STEREO:
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO:
        mask = AudioFormat.CHANNEL_IN_STEREO;
        break;
    case (AudioFormat.CHANNEL_IN_FRONT | AudioFormat.CHANNEL_IN_BACK):
        mask = inChannelConfig;
        break;
    default:
        throw new IllegalArgumentException("Unsupported channel configuration.");
    }
    if (!allowLegacyConfig && ((inChannelConfig == AudioFormat.CHANNEL_CONFIGURATION_MONO)
            || (inChannelConfig == AudioFormat.CHANNEL_CONFIGURATION_STEREO))) {
        // only happens with the constructor that uses AudioAttributes and AudioFormat
        throw new IllegalArgumentException("Unsupported deprecated configuration.");
    }
    // return the mask; here the value is CHANNEL_IN_STEREO = 12
    return mask;
}

3. Call setChannelMask() to store the computed mask on the AudioFormat.Builder:

This method contains no complex logic; it simply saves channelMask into the member variable mChannelMask.

public @NonNull Builder setChannelMask(int channelMask) {
    if (channelMask == CHANNEL_INVALID) {
        throw new IllegalArgumentException("Invalid zero channel mask");
    } else if (/* channelMask != 0 && */ mChannelIndexMask != 0 &&
            Integer.bitCount(channelMask) != Integer.bitCount(mChannelIndexMask)) {
        throw new IllegalArgumentException("Mismatched channel count for mask " +
                Integer.toHexString(channelMask).toUpperCase());
    }
    // save the given channelMask into mChannelMask
    mChannelMask = channelMask;
    // record the property in mPropertySetMask; this is used later
    mPropertySetMask |= AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK;
    return this;
}
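The only real validation here is the bit-count consistency check against an already-set channel index mask. A small sketch, derived from the code quoted above, of how that check would trip (the 0x7 index mask is a made-up value for illustration):

AudioFormat.Builder b = new AudioFormat.Builder()
        .setChannelIndexMask(0x7);               // hypothetical: channels 0, 1, 2 -> 3 channels
// CHANNEL_IN_STEREO has only 2 bits set, so the channel counts mismatch:
b.setChannelMask(AudioFormat.CHANNEL_IN_STEREO); // throws IllegalArgumentException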
4. Call build() to wrap the supplied parameters into an AudioFormat:

public AudioFormat build() {
    AudioFormat af = new AudioFormat(
            mPropertySetMask,
            mEncoding,
            mSampleRate,
            mChannelMask,
            mChannelIndexMask);
    return af;
}
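Putting steps 2–4 together: for our parameters, the AudioFormat that the legacy constructor ends up building is equivalent to the following builder chain (a sketch for illustration, mirroring the constructor shown earlier):

AudioFormat fmt = new AudioFormat.Builder()
        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO) // mask = 12, sets AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)   // 2
        .setSampleRate(16000)
        .build();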
5. With the AudioFormat built, construct the AudioRecord from it:

// this AudioFormat is the one produced by build() above, carrying the recording parameters
public AudioRecord(AudioAttributes attributes, AudioFormat format, int bufferSizeInBytes,
        int sessionId) throws IllegalArgumentException {
    mRecordingState = RECORDSTATE_STOPPED;

    if (attributes == null) {
        throw new IllegalArgumentException("Illegal null AudioAttributes");
    }
    if (format == null) {
        throw new IllegalArgumentException("Illegal null AudioFormat");
    }

    // remember which looper is associated with the AudioRecord instanciation
    if ((mInitializationLooper = Looper.myLooper()) == null) {
        mInitializationLooper = Looper.getMainLooper();
    }

    // is this AudioRecord using REMOTE_SUBMIX at full volume?
    if (attributes.getCapturePreset() == MediaRecorder.AudioSource.REMOTE_SUBMIX) {
        final AudioAttributes.Builder filteredAttr = new AudioAttributes.Builder();
        final Iterator<String> tagsIter = attributes.getTags().iterator();
        while (tagsIter.hasNext()) {
            final String tag = tagsIter.next();
            if (tag.equalsIgnoreCase(SUBMIX_FIXED_VOLUME)) {
                mIsSubmixFullVolume = true;
                Log.v(TAG, "Will record from REMOTE_SUBMIX at full fixed volume");
            } else { // SUBMIX_FIXED_VOLUME: is not to be propagated to the native layers
                filteredAttr.addTag(tag);
            }
        }
        filteredAttr.setInternalCapturePreset(attributes.getCapturePreset());
        mAudioAttributes = filteredAttr.build();
    } else {
        mAudioAttributes = attributes;
    }

    // read the sample rate from the format; here we passed in 16000
    int rate = format.getSampleRate();
    if (rate == AudioFormat.SAMPLE_RATE_UNSPECIFIED) {
        rate = 0;
    }

    int encoding = AudioFormat.ENCODING_DEFAULT;
    if ((format.getPropertySetMask() & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_ENCODING) != 0) {
        encoding = format.getEncoding();
    }

    audioParamCheck(attributes.getCapturePreset(), rate, encoding);

    if ((format.getPropertySetMask()
            & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_INDEX_MASK) != 0) {
        mChannelIndexMask = format.getChannelIndexMask();
        mChannelCount = format.getChannelCount();
    }
    // remember AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK? we set it in setChannelMask() above
    if ((format.getPropertySetMask()
            & AudioFormat.AUDIO_FORMAT_HAS_PROPERTY_CHANNEL_MASK) != 0) {
        // compute mChannelMask
        mChannelMask = getChannelMaskFromLegacyConfig(format.getChannelMask(), false);
        mChannelCount = format.getChannelCount();
    } else if (mChannelIndexMask == 0) {
        mChannelMask = getChannelMaskFromLegacyConfig(AudioFormat.CHANNEL_IN_DEFAULT, false);
        mChannelCount = AudioFormat.channelCountFromInChannelMask(mChannelMask);
    }

    audioBuffSizeCheck(bufferSizeInBytes);

    int[] sampleRate = new int[] {mSampleRate};
    int[] session = new int[1];
    session[0] = sessionId;

    //TODO: update native initialization when information about hardware init failure
    //      due to capture device already open is available.
    // key step: call the JNI method native_setup, passing in
    //   sampleRate (16000),
    //   mChannelMask (stereo: CHANNEL_IN_STEREO = 12),
    //   mChannelIndexMask (unused here, 0),
    //   mAudioFormat (ENCODING_PCM_16BIT = 2)
    int initResult = native_setup(new WeakReference<AudioRecord>(this),
            mAudioAttributes, sampleRate, mChannelMask, mChannelIndexMask,
            mAudioFormat, mNativeBufferSizeInBytes,
            session, getCurrentOpPackageName(), 0 /*nativeRecordInJavaObj*/);
    if (initResult != SUCCESS) {
        loge("Error code " + initResult + " when initializing native AudioRecord object.");
        return; // with mState == STATE_UNINITIALIZED
    }

    mSampleRate = sampleRate[0];
    mSessionId = session[0];

    mState = STATE_INITIALIZED;
}

A supplementary note on the difference between the two mask fields in the AudioRecord class:

/**
 * The audio channel position mask
 * marks the spatial position of each channel, e.g. front-left, front-right, center
 */
private int mChannelMask;
/**
 * The audio channel index mask
 * marks channels by their index
 */
private int mChannelIndexMask;

Difference between mChannelMask and mChannelIndexMask in AudioRecord

I. mChannelMask: Channel Position Mask

Full name: Channel Position Mask

Purpose: describes the spatial position of each audio channel (e.g. left, right, front, back)

Common values:

       AudioFormat.CHANNEL_IN_MONO (mono = FRONT)
       AudioFormat.CHANNEL_IN_STEREO (stereo = LEFT + RIGHT)
       AudioFormat.CHANNEL_IN_LEFT
       AudioFormat.CHANNEL_IN_RIGHT

Typical use cases:

Ordinary microphone recording

Conventional audio input configurations

✅ Carries human-readable semantics, convenient to use, suitable for most common purposes

II. mChannelIndexMask: Channel Index Mask

Full name: Channel Index Mask

Purpose: channels are identified by concrete indices (e.g. channel 0, channel 1)

Common values (combined together):

        AudioFormat.CHANNEL_INDEX_MASK_0
        AudioFormat.CHANNEL_INDEX_MASK_1

Typical use cases:

Multi-microphone arrays (e.g. in-car arrays, smart speakers)

Advanced audio processing (e.g. beamforming, sound-source localization)

Custom channel ordering, or channels without specific spatial semantics

More flexible and extensible, but carries no semantics; the meaning of each channel must be defined by the application


The differences are compared below (a usage sketch follows the table):

Attribute        | mChannelMask                                 | mChannelIndexMask
Representation   | channel spatial position (e.g. LEFT, RIGHT)  | channel array index (e.g. index 0, 1)
Semantics        | explicit (spatial meaning)                   | none (defined by the application)
Ease of use      | simple                                       | more complex
Use cases        | ordinary audio capture                       | multi-channel processing (e.g. mic arrays)
Coexistence      | no, usually only one is used                 | no, channelIndexMask takes priority
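In code, the two masks correspond to two different Builder calls. A minimal sketch of the two styles (the four-channel index mask 0xf matches the in-built example at the beginning of this post):

// Position mask: two channels with spatial meaning (left + right)
AudioFormat positionFmt = new AudioFormat.Builder()
        .setSampleRate(16000)
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setChannelMask(AudioFormat.CHANNEL_IN_STEREO)
        .build();

// Index mask: four channels identified only by index 0..3 (e.g. MIC1+MIC2+REF1+REF2)
AudioFormat indexFmt = new AudioFormat.Builder()
        .setSampleRate(16000)
        .setEncoding(AudioFormat.ENCODING_PCM_16BIT)
        .setChannelIndexMask(0xf)
        .build();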
6. Call the JNI method native_setup()

The JNI binding for this method is defined in the following file:
@android11.0/frameworks/base/core/jni/android_media_AudioRecord.cpp

{"native_start",         "(II)I",    (void *)android_media_AudioRecord_start},{"native_stop",          "()V",    (void *)android_media_AudioRecord_stop},{"native_setup",         "(Ljava/lang/Object;Ljava/lang/Object;[IIIII[ILjava/lang/String;J)I",(void *)android_media_AudioRecord_setup},

The JNI function corresponding to native_setup is android_media_AudioRecord_setup.
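Reading the signature string, "(Ljava/lang/Object;Ljava/lang/Object;[IIIII[ILjava/lang/String;J)I" maps to (Object, Object, int[], int, int, int, int, int[], String, long) returning int. The Java-side declaration in AudioRecord.java therefore looks roughly like the sketch below; the parameter names are recalled from the AOSP source and may differ slightly.

private native final int native_setup(Object audiorecordThis,
        Object /*AudioAttributes*/ attributes,
        int[] sampleRate, int channelMask, int channelIndexMask, int audioFormat,
        int buffSizeInBytes, int[] sessionId, String opPackageName,
        long nativeRecordInJavaObj);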

7. Call android_media_AudioRecord_setup:
static jint
android_media_AudioRecord_setup(JNIEnv *env, jobject thiz, jobject weak_this,
        jobject jaa, jintArray jSampleRate, jint channelMask, jint channelIndexMask,
        jint audioFormat, jint buffSizeInBytes, jintArray jSession, jstring opPackageName,
        jlong nativeRecordInJavaObj)
{
    //ALOGV(">> Entering android_media_AudioRecord_setup");
    //ALOGV("sampleRate=%d, audioFormat=%d, channel mask=%x, buffSizeInBytes=%d "
    //     "nativeRecordInJavaObj=0x%llX",
    //     sampleRateInHertz, audioFormat, channelMask, buffSizeInBytes, nativeRecordInJavaObj);

    // 7.1 channelMask is converted here; the value passed in comes back essentially unchanged
    audio_channel_mask_t localChanMask = inChannelMaskToNative(channelMask);

    if (jSession == NULL) {
        ALOGE("Error creating AudioRecord: invalid session ID pointer");
        return (jint) AUDIO_JAVA_ERROR;
    }

    jint* nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        return (jint) AUDIO_JAVA_ERROR;
    }
    audio_session_t sessionId = (audio_session_t) nSession[0];
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    sp<AudioRecord> lpRecorder = 0;
    audiorecord_callback_cookie *lpCallbackData = NULL;

    jclass clazz = env->GetObjectClass(thiz);
    if (clazz == NULL) {
        ALOGE("Can't find %s when setting up callback.", kClassPathName);
        return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
    }

    // if we pass in an existing *Native* AudioRecord, we don't need to create/initialize one.
    if (nativeRecordInJavaObj == 0) {
        if (jaa == 0) {
            ALOGE("Error creating AudioRecord: invalid audio attributes");
            return (jint) AUDIO_JAVA_ERROR;
        }

        if (jSampleRate == 0) {
            ALOGE("Error creating AudioRecord: invalid sample rates");
            return (jint) AUDIO_JAVA_ERROR;
        }
        jint elements[1];
        env->GetIntArrayRegion(jSampleRate, 0, 1, elements);
        int sampleRateInHertz = elements[0];

        // channel index mask takes priority over channel position masks.
        // if channelIndexMask was set, it overrides the position mask (channelMask)
        if (channelIndexMask) {
            // Java channel index masks need the representation bits set.
            localChanMask = audio_channel_mask_from_representation_and_bits(
                    AUDIO_CHANNEL_REPRESENTATION_INDEX,
                    channelIndexMask);
        }
        // Java channel position masks map directly to the native definition
        // check that the computed localChanMask is a valid input channel mask
        if (!audio_is_input_channel(localChanMask)) {
            ALOGE("Error creating AudioRecord: channel mask %#x is not valid.", localChanMask);
            return (jint) AUDIORECORD_ERROR_SETUP_INVALIDCHANNELMASK;
        }
        // count the 1 bits in localChanMask to get channelCount;
        // e.g. CHANNEL_IN_STEREO = 12 = 0b1100 (4 + 8), i.e. 2 channels
        uint32_t channelCount = audio_channel_count_from_in_mask(localChanMask);

        // compare the format against the Java constants
        // 7.2 the audioFormat passed from the Java layer is converted here
        audio_format_t format = audioFormatToNative(audioFormat);
        if (format == AUDIO_FORMAT_INVALID) {
            ALOGE("Error creating AudioRecord: unsupported audio format %d.", audioFormat);
            return (jint) AUDIORECORD_ERROR_SETUP_INVALIDFORMAT;
        }

        // bytes per sample; for ENCODING_PCM_16BIT = 2 the native format is
        // AUDIO_FORMAT_PCM_16_BIT, so this returns 2 bytes (16 bits) per sample
        size_t bytesPerSample = audio_bytes_per_sample(format);

        if (buffSizeInBytes == 0) {
            ALOGE("Error creating AudioRecord: frameCount is 0.");
            return (jint) AUDIORECORD_ERROR_SETUP_ZEROFRAMECOUNT;
        }
        size_t frameSize = channelCount * bytesPerSample;
        size_t frameCount = buffSizeInBytes / frameSize;

        ScopedUtfChars opPackageNameStr(env, opPackageName);

        // create an uninitialized AudioRecord object
        lpRecorder = new AudioRecord(String16(opPackageNameStr.c_str()));

        // read the AudioAttributes values
        auto paa = JNIAudioAttributeHelper::makeUnique();
        jint jStatus = JNIAudioAttributeHelper::nativeFromJava(env, jaa, paa.get());
        if (jStatus != (jint)AUDIO_JAVA_SUCCESS) {
            return jStatus;
        }
        ALOGV("AudioRecord_setup for source=%d tags=%s flags=%08x", paa->source, paa->tags, paa->flags);

        audio_input_flags_t flags = AUDIO_INPUT_FLAG_NONE;
        if (paa->flags & AUDIO_FLAG_HW_HOTWORD) {
            flags = AUDIO_INPUT_FLAG_HW_HOTWORD;
        }
        // create the callback information:
        // this data will be passed with every AudioRecord callback
        lpCallbackData = new audiorecord_callback_cookie;
        lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioRecord object can be garbage collected.
        lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
        lpCallbackData->busy = false;

        // call AudioRecord::set() to apply the parameters
        const status_t status = lpRecorder->set(paa->source,
            sampleRateInHertz,
            format,        // word length, PCM
            localChanMask,
            frameCount,
            recorderCallback,// callback_t
            lpCallbackData,// void* user
            0,             // notificationFrames,
            true,          // threadCanCallJava
            sessionId,
            AudioRecord::TRANSFER_DEFAULT,
            flags,
            -1, -1,        // default uid, pid
            paa.get());

        if (status != NO_ERROR) {
            ALOGE("Error creating AudioRecord instance: initialization check failed with status %d.",
                    status);
            goto native_init_failure;
        }
        // Set caller name so it can be logged in destructor.
        // MediaMetricsConstants.h: AMEDIAMETRICS_PROP_CALLERNAME_VALUE_JAVA
        lpRecorder->setCallerName("java");
    } else { // end if (nativeRecordInJavaObj == 0)
        lpRecorder = (AudioRecord*)nativeRecordInJavaObj;
        // TODO: We need to find out which members of the Java AudioRecord might need to be
        // initialized from the Native AudioRecord
        // these are directly returned from getters:
        //  mSampleRate
        //  mRecordSource
        //  mAudioFormat
        //  mChannelMask
        //  mChannelCount
        //  mState (?)
        //  mRecordingState (?)
        //  mPreferredDevice

        // create the callback information:
        // this data will be passed with every AudioRecord callback
        lpCallbackData = new audiorecord_callback_cookie;
        lpCallbackData->audioRecord_class = (jclass)env->NewGlobalRef(clazz);
        // we use a weak reference so the AudioRecord object can be garbage collected.
        lpCallbackData->audioRecord_ref = env->NewGlobalRef(weak_this);
        lpCallbackData->busy = false;
    }

    nSession = (jint *) env->GetPrimitiveArrayCritical(jSession, NULL);
    if (nSession == NULL) {
        ALOGE("Error creating AudioRecord: Error retrieving session id pointer");
        goto native_init_failure;
    }
    // read the audio session ID back from AudioRecord in case a new session was created during set()
    nSession[0] = lpRecorder->getSessionId();
    env->ReleasePrimitiveArrayCritical(jSession, nSession, 0);
    nSession = NULL;

    {
        const jint elements[1] = { (jint) lpRecorder->getSampleRate() };
        env->SetIntArrayRegion(jSampleRate, 0, 1, elements);
    }

    {   // scope for the lock
        Mutex::Autolock l(sLock);
        sAudioRecordCallBackCookies.add(lpCallbackData);
    }
    // save our newly created C++ AudioRecord in the "nativeRecorderInJavaObj" field
    // of the Java object
    setAudioRecord(env, thiz, lpRecorder);

    // save our newly created callback information in the "nativeCallbackCookie" field
    // of the Java object (in mNativeCallbackCookie) so we can free the memory in finalize()
    env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, (jlong)lpCallbackData);

    return (jint) AUDIO_JAVA_SUCCESS;

    // failure:
native_init_failure:
    env->DeleteGlobalRef(lpCallbackData->audioRecord_class);
    env->DeleteGlobalRef(lpCallbackData->audioRecord_ref);
    delete lpCallbackData;
    env->SetLongField(thiz, javaAudioRecordFields.nativeCallbackCookie, 0);

    // lpRecorder goes out of scope, so reference count drops to zero
    return (jint) AUDIORECORD_ERROR_SETUP_NATIVEINITFAILED;
}

// use the previously stored AudioRecord handle to get the AudioRecord instance;
// this pattern is very common for JNI <-> Java interaction
static sp<AudioRecord> getAudioRecord(JNIEnv* env, jobject thiz)
{
    Mutex::Autolock l(sLock);
    AudioRecord* const ar =
            (AudioRecord*)env->GetLongField(thiz, javaAudioRecordFields.nativeRecorderInJavaObj);
    return sp<AudioRecord>(ar);
}
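To make the frameCount arithmetic above concrete, the sketch below replays the computation for the in-built example from the beginning of the post (setChannelIndexMask(0xf) gives 4 channels, 16-bit PCM, 5120-byte buffer); for the stereo third-party path the frame size would be 2 * 2 = 4 bytes instead.

int channelCount = 4;                            // from setChannelIndexMask(0xf)
int bytesPerSample = 2;                          // AUDIO_FORMAT_PCM_16_BIT
int frameSize = channelCount * bytesPerSample;   // 8 bytes per frame
int frameCount = 5120 / frameSize;               // 640 frames for the 5120-byte buffer above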
7.1 inChannelMaskToNative converts the mask passed from the Java layer into the native value:

Here the mask passed in is CHANNEL_IN_STEREO = 12.

static inline audio_channel_mask_t inChannelMaskToNative(int channelMask)
{
    switch (channelMask) {
    case CHANNEL_IN_DEFAULT:
        return AUDIO_CHANNEL_NONE;
    default:
        return (audio_channel_mask_t)channelMask;
    }
}
7.2 The audioFormat passed from the Java layer goes through a conversion in audioFormatToNative:

When the Java side passes audioFormat = ENCODING_PCM_16BIT = 2, the converted value is AUDIO_FORMAT_PCM_16_BIT = 0x1u. In the logs, the Java side prints audioFormat as 2, and by this point it has become 0x1u, as the "format 0x1" in the log below shows:

2025-05-30 11:49:28.308 2330-2330/? V/AudioRecord: set(): inputSource 1, sampleRate 16000, format 0x1, channelMask 0x8000000f, frameCount 160, notificationFrames 0, sessionId 0, transferType 0, flags 0, opPackageName com.test.dummy uid -1, pid -1
static inline audio_format_t audioFormatToNative(int audioFormat)
{
    switch (audioFormat) {
    case ENCODING_PCM_16BIT:
        return AUDIO_FORMAT_PCM_16_BIT;
    case ENCODING_PCM_8BIT:
        return AUDIO_FORMAT_PCM_8_BIT;
    case ENCODING_PCM_FLOAT:
        return AUDIO_FORMAT_PCM_FLOAT;
    case ENCODING_AC3:
        return AUDIO_FORMAT_AC3;
    case ENCODING_E_AC3:
        return AUDIO_FORMAT_E_AC3;
    case ENCODING_DTS:
        return AUDIO_FORMAT_DTS;
    case ENCODING_DTS_HD:
        return AUDIO_FORMAT_DTS_HD;
    case ENCODING_MP3:
        return AUDIO_FORMAT_MP3;
    case ENCODING_AAC_LC:
        return AUDIO_FORMAT_AAC_LC;
    case ENCODING_AAC_HE_V1:
        return AUDIO_FORMAT_AAC_HE_V1;
    case ENCODING_AAC_HE_V2:
        return AUDIO_FORMAT_AAC_HE_V2;
    case ENCODING_IEC61937:
        return AUDIO_FORMAT_IEC61937;
    case ENCODING_DOLBY_TRUEHD:
        return AUDIO_FORMAT_DOLBY_TRUEHD;
    case ENCODING_AAC_ELD:
        return AUDIO_FORMAT_AAC_ELD;
    case ENCODING_AAC_XHE:
        return AUDIO_FORMAT_AAC_XHE;
    case ENCODING_AC4:
        return AUDIO_FORMAT_AC4;
    case ENCODING_E_AC3_JOC:
        return AUDIO_FORMAT_E_AC3_JOC;
    case ENCODING_DEFAULT:
        return AUDIO_FORMAT_DEFAULT;
    case ENCODING_DOLBY_MAT:
        return AUDIO_FORMAT_MAT;
    case ENCODING_OPUS:
        return AUDIO_FORMAT_OPUS;
    default:
        return AUDIO_FORMAT_INVALID;
    }
}
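The same log line also shows channelMask 0x8000000f. That value is what audio_channel_mask_from_representation_and_bits() produces on the in-built path that called setChannelIndexMask(0xf): the index representation constant (AUDIO_CHANNEL_REPRESENTATION_INDEX, assumed here to be 2 as in the native headers) is shifted into the top bits and OR-ed with the index bits. A small worked sketch:

int representation = 2;                  // AUDIO_CHANNEL_REPRESENTATION_INDEX (assumed native value)
int indexBits = 0xf;                     // four channels, from setChannelIndexMask(0xf)
int nativeMask = (representation << 30) | indexBits;
// Integer.toHexString(nativeMask) -> "8000000f", matching "channelMask 0x8000000f" in the log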

Where are these ENCODING_* constants defined?
/* android11.0/frameworks/base/core/jni/android_media_AudioFormat.h */

#define ENCODING_PCM_16BIT      2
#define ENCODING_PCM_8BIT       3
#define ENCODING_PCM_FLOAT      4
#define ENCODING_AC3            5
#define ENCODING_E_AC3          6
#define ENCODING_DTS            7
#define ENCODING_DTS_HD         8
#define ENCODING_MP3            9
#define ENCODING_AAC_LC         10
#define ENCODING_AAC_HE_V1      11
#define ENCODING_AAC_HE_V2      12
#define ENCODING_IEC61937       13
#define ENCODING_DOLBY_TRUEHD   14
#define ENCODING_AAC_ELD        15
#define ENCODING_AAC_XHE        16
#define ENCODING_AC4            17
#define ENCODING_E_AC3_JOC      18
#define ENCODING_DOLBY_MAT      19
#define ENCODING_OPUS           20

The corresponding native values after conversion are:
/*android11.0/system/media/audio/include/system/audio-base.h */

    /* Aliases */
    AUDIO_FORMAT_PCM_16_BIT            = 0x1u,        // (PCM | PCM_SUB_16_BIT)
    AUDIO_FORMAT_PCM_8_BIT             = 0x2u,        // (PCM | PCM_SUB_8_BIT)
    AUDIO_FORMAT_PCM_32_BIT            = 0x3u,        // (PCM | PCM_SUB_32_BIT)
    AUDIO_FORMAT_PCM_8_24_BIT          = 0x4u,        // (PCM | PCM_SUB_8_24_BIT)
    AUDIO_FORMAT_PCM_FLOAT             = 0x5u,        // (PCM | PCM_SUB_FLOAT)
    AUDIO_FORMAT_PCM_24_BIT_PACKED     = 0x6u,        // (PCM | PCM_SUB_24_BIT_PACKED)
    AUDIO_FORMAT_AAC_MAIN              = 0x4000001u,  // (AAC | AAC_SUB_MAIN)
    AUDIO_FORMAT_AAC_LC                = 0x4000002u,  // (AAC | AAC_SUB_LC)
    AUDIO_FORMAT_AAC_SSR               = 0x4000004u,  // (AAC | AAC_SUB_SSR)
    AUDIO_FORMAT_AAC_LTP               = 0x4000008u,  // (AAC | AAC_SUB_LTP)
    AUDIO_FORMAT_AAC_HE_V1             = 0x4000010u,  // (AAC | AAC_SUB_HE_V1)
    AUDIO_FORMAT_AAC_SCALABLE          = 0x4000020u,  // (AAC | AAC_SUB_SCALABLE)
    AUDIO_FORMAT_AAC_ERLC              = 0x4000040u,  // (AAC | AAC_SUB_ERLC)
    AUDIO_FORMAT_AAC_LD                = 0x4000080u,  // (AAC | AAC_SUB_LD)
    AUDIO_FORMAT_AAC_HE_V2             = 0x4000100u,  // (AAC | AAC_SUB_HE_V2)
    AUDIO_FORMAT_AAC_ELD               = 0x4000200u,  // (AAC | AAC_SUB_ELD)
    AUDIO_FORMAT_AAC_XHE               = 0x4000300u,  // (AAC | AAC_SUB_XHE)
    AUDIO_FORMAT_AAC_ADTS_MAIN         = 0x1e000001u, // (AAC_ADTS | AAC_SUB_MAIN)
    AUDIO_FORMAT_AAC_ADTS_LC           = 0x1e000002u, // (AAC_ADTS | AAC_SUB_LC)
    AUDIO_FORMAT_AAC_ADTS_SSR          = 0x1e000004u, // (AAC_ADTS | AAC_SUB_SSR)
    AUDIO_FORMAT_AAC_ADTS_LTP          = 0x1e000008u, // (AAC_ADTS | AAC_SUB_LTP)
    AUDIO_FORMAT_AAC_ADTS_HE_V1        = 0x1e000010u, // (AAC_ADTS | AAC_SUB_HE_V1)
    AUDIO_FORMAT_AAC_ADTS_SCALABLE     = 0x1e000020u, // (AAC_ADTS | AAC_SUB_SCALABLE)
    AUDIO_FORMAT_AAC_ADTS_ERLC         = 0x1e000040u, // (AAC_ADTS | AAC_SUB_ERLC)
    AUDIO_FORMAT_AAC_ADTS_LD           = 0x1e000080u, // (AAC_ADTS | AAC_SUB_LD)
    AUDIO_FORMAT_AAC_ADTS_HE_V2        = 0x1e000100u, // (AAC_ADTS | AAC_SUB_HE_V2)
    AUDIO_FORMAT_AAC_ADTS_ELD          = 0x1e000200u, // (AAC_ADTS | AAC_SUB_ELD)
    AUDIO_FORMAT_AAC_ADTS_XHE          = 0x1e000300u, // (AAC_ADTS | AAC_SUB_XHE)
    AUDIO_FORMAT_AAC_LATM_LC           = 0x25000002u, // (AAC_LATM | AAC_SUB_LC)
    AUDIO_FORMAT_AAC_LATM_HE_V1        = 0x25000010u, // (AAC_LATM | AAC_SUB_HE_V1)
    AUDIO_FORMAT_AAC_LATM_HE_V2        = 0x25000100u, // (AAC_LATM | AAC_SUB_HE_V2)
    AUDIO_FORMAT_E_AC3_JOC             = 0xA000001u,  // (E_AC3 | E_AC3_SUB_JOC)
    AUDIO_FORMAT_MAT_1_0               = 0x24000001u, // (MAT | MAT_SUB_1_0)
    AUDIO_FORMAT_MAT_2_0               = 0x24000002u, // (MAT | MAT_SUB_2_0)
    AUDIO_FORMAT_MAT_2_1               = 0x24000003u, // (MAT | MAT_SUB_2_1)
} audio_format_t;

At this point, the flow from creating an AudioRecord in the Java layer through initialization in the JNI layer is complete. The follow-up analysis will continue in the next post.

"Focused on AAOS architecture and hands-on practice — follow along and explore automotive development together."
