Create the project
Create a new project in Xcode.
Configuration steps
1. Download the SDK
Copy Resources.bundle out of the IDRSSDK.framework directory, then place IDRSSDK.framework and Resources.bundle together in your app folder.
2. Configure the project
a. Add the IDRSSDK.framework dynamic library, along with the other required libraries, as shown:
b. Add the Resources.bundle resource, as shown:
c. Set Other Linker Flags to -ObjC, as shown:
d. Allow HTTP connections, as shown:
e. Grant usage permissions for the camera, microphone, and photo library.
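For steps d and e, the corresponding Info.plist entries look roughly like the following sketch (the key names are standard iOS Info.plist keys; the description strings are placeholders, and the blanket ATS exception should be narrowed to your own domains where possible):

```xml
<!-- d. Allow plain-HTTP requests (App Transport Security exception) -->
<key>NSAppTransportSecurity</key>
<dict>
    <key>NSAllowsArbitraryLoads</key>
    <true/>
</dict>
<!-- e. Usage descriptions shown in the system permission prompts -->
<key>NSCameraUsageDescription</key>
<string>Camera access is needed to record video</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed to record audio</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Photo library access is needed to read images</string>
<key>NSPhotoLibraryAddUsageDescription</key>
<string>Photo library access is needed to save images</string>
```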
iOS flow chart
iOS class diagram
Main classes
Class | Description |
---|---|
RTCSampleChatViewController | Handles UI layout, the operation flow, and the initialization and use of the RTC SDK and the IDRSSDK |
idrs_NetWorking | Wrapper for networking and request signing |
FaceDetectView | Displays the face frame (draws the face bounding box) |
IDRSSDK.h | AI detection |
Main methods of RTCSampleChatViewController
Method | Description |
---|---|
initRTCSDK | Initializes the RTC SDK |
initIDRSSDKSDK | Initializes the IDRSSDK |
startPreview | Local video preview (includes subscribing to video streams) |
publish | Publishes the local video stream |
JoinChannel | Formally joins the meeting (includes subscribing to audio streams) |
onSubscribeChangedNotify | Remote subscription callback (gets the remote video streams and the number of remote windows) |
onCaptureVideoSample | Local video stream callback (returns the local frames to be analyzed) |
onRemoteVideoSample | Remote video stream callback (returns the remote frames to be analyzed) |
onAudioSampleCallback | Audio stream callback (returns the audio of whichever side is subscribed) |
Main methods of idrs_NetWorking
Method | Description |
---|---|
HttpPutWithMethod | Wrapper for PUT requests |
HttpGetWithMethod | Wrapper for GET requests |
HTTPWithMethod | Wrapper for POP API requests (includes signing) |
Main methods of FaceDetectView
Method | Description |
---|---|
drawRect | Draws the bounding box (including its on-screen position) |
setDetectResult | Processes the face information |
Main methods of IDRSSDK.h
Method | Description |
---|---|
detectFace | Detects facial feature values |
detectIDCard | Detects ID cards |
detectHandGesture | Detects dynamic hand gestures |
detectHandStaticGesture | Detects static hand gestures |
faceRecognitionSimilarity | Compares two faces for similarity |
faceTrackFromImage | Detects faces in an image |
faceTrackFromVideo | Detects faces in a video stream |
faceTrackFromRemoteVideo | Detects faces in a remote RTC video stream |
startDialog | Starts wake-word detection |
stopDialog | Stops wake-word detection |
feedAudioFrame | Feeds in an external audio stream |
Meetings
1. Join a meeting
Roles
- Insurance agent: creates the meeting; calls both the create-meeting API and the join-meeting API.
- Policyholder and beneficiary: only join the meeting; they call the join-meeting API only.
Calling the APIs
First, the insurance agent calls the create-meeting API to obtain a meeting code:
//Code location: FaceSDKDemoFirstViewController.m
//API implementation: idrs_NetWorking.m
NSDictionary *userinfo = @{@"Action":@"CreateLive", @"AppId":@"ulw07lvw-1", @"Name":@"张三", @"UserId":self.uuid};
//Create the meeting
[idrs_NetWorking HTTPWithMethod:@"POST" body:userinfo success:^(id _Nonnull responseObject) {
    NSLog(@"Success: %@", responseObject);
    NSDictionary *dic = [responseObject objectForKey:@"Data"];
    self.channel = [dic objectForKey:@"Channel"];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.tipLable.text = [NSString stringWithFormat:@"视频会议码为:%@,请发送给客户,通过输入会议码加入远程双录", self.channel];
    });
} failure:^(NSError * _Nonnull error) {
    NSLog(@"Error: %@", error);
}];
This API returns a meeting code.
- Next, call the join-meeting API (called by every role) and pass the returned data to RTCSampleChatViewController:
//Code location: FaceSDKDemoFirstViewController.m
-(void)JoinMettingWithChannel:(NSString*)channelName AndName:(NSString*)userNames{
    [RTCSampleUserAuthrization getPassportFromAppServer:channelName userName:userNames success:^(AliRtcAuthInfo *info, NSString *liveId) {
        dispatch_async(dispatch_get_main_queue(), ^{
            AppDelegate *appDelegate = (AppDelegate *)[UIApplication sharedApplication].delegate;
            appDelegate.chatVC = [appDelegate RTCChatView];
            appDelegate.chatVC.channelName = channelName;
            appDelegate.chatVC.manName = userNames;
            appDelegate.chatVC.info = info;
            appDelegate.chatVC.userId = info.user_id;
            appDelegate.chatVC.liveId = liveId;
            appDelegate.chatVC.audioCapture = true;
            appDelegate.chatVC.audioPlayer = true;
            [self.navigationController pushViewController:appDelegate.chatVC animated:YES];
            [self.bgScroll bringSubviewToFront:self.bgView];
        });
    } failure:^(NSError *error) {
        //Handle the error
    }];
}
The code above calls the getPassportFromAppServer method:
//Code location: RTCSampleUserAuthrization.m
//API implementation: idrs_NetWorking.m
+ (void)getPassportFromAppServer:(NSString *)channelName userName:(NSString *)name success:(void (^)(AliRtcAuthInfo *info, NSString *liveId))success failure:(void (^)(NSError *error))failure{
    __block AliRtcAuthInfo *info = [[AliRtcAuthInfo alloc] init];
    NSString *userId = [[NSUserDefaults standardUserDefaults] objectForKey:@"userId"];
    NSDictionary *dic = @{@"Action":@"JoinLive", @"Channel":channelName, @"UserId":userId};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject) {
        if (![responseObject[@"Code"] isEqualToString:@"OK"]) {
            //Wrap the error code in an NSError before invoking the failure block
            failure([NSError errorWithDomain:@"idrs" code:-1 userInfo:@{NSLocalizedDescriptionKey: responseObject[@"Code"]}]);
        }else{
            NSMutableDictionary *loginDic = [[NSMutableDictionary alloc] init];
            NSDictionary *dataDic = responseObject[@"Data"][@"TokenData"];
            for (NSString *key in dataDic.allKeys) {
                [loginDic setObject:dataDic[key] forKey:key];
            }
            info.channel = channelName;
            info.appid = loginDic[@"AppId"];
            info.nonce = loginDic[@"Nonce"];
            info.user_id = loginDic[@"UserId"];
            info.token = loginDic[@"Token"];
            info.timestamp = [loginDic[@"Timestamp"] longLongValue];
            info.gslb = loginDic[@"Gslb"];
            NSString *liveId = loginDic[@"LiveId"];
            success(info, liveId);
        }
    } failure:^(NSError * _Nonnull error) {
        failure(error);
    }];
}
RTCSampleChatViewController is the main view controller for the remote meeting; everything that follows happens in this controller. First, the initialization:
- (void)initializeSDK{
    //Create the SDK instance and register the delegate; extras may be empty
    NSMutableDictionary *extrasDic = [[NSMutableDictionary alloc] init];
    [extrasDic setValue:@"ENGINE_BASIC_QUALITY_MODE" forKey:@"user_specified_engine_mode"];
    [extrasDic setValue:@"SCENE_MUSIC_MODE" forKey:@"user_specified_scene_mode"];
    NSError *parseError = nil;
    NSData *jsonData = [NSJSONSerialization dataWithJSONObject:extrasDic options:NSJSONWritingPrettyPrinted error:&parseError];
    NSString *extras = [[NSString alloc] initWithData:jsonData encoding:NSUTF8StringEncoding];
    _engine = [AliRtcEngine sharedInstance:self extras:extras];
    [_engine setSubscribeAudioSampleRate:AliRtcAudioSampleRate_16000];
    [self.engine setDeviceOrientationMode:(AliRtcOrientationModeAuto)];
}
2. Building the preview UI
The flow was outlined above; here it is in detail.
Local preview
- (void)startPreview{
    //Configure the local preview view
    AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
    AliRenderView *viewLocal = [[AliRenderView alloc] initWithFrame:CGRectMake(self.view.frame.origin.x, self.view.frame.origin.y, self.view.frame.size.height, self.view.frame.size.width)];
    canvas.view = viewLocal;
    canvas.renderMode = AliRtcRenderModeAuto;
    [self.view addSubview:viewLocal];
    [self.engine setLocalViewConfig:canvas forTrack:AliRtcVideoTrackCamera];
    //Start the local preview
    [self.engine startPreview];
    [self.engine registerVideoSampleObserver];
    [self startPreview:nil];
}
Log in to the server and start publishing:
- (void)startPreview:(UIButton *)sender {
    //Use automatic (or manual) publish/subscribe mode
    [self.engine setAutoPublish:YES withAutoSubscribe:YES];
    [self JoinChannel:self.info :self.manName];//Log in
    //Prevent the screen from locking
    [UIApplication sharedApplication].idleTimerDisabled = YES;
    //Manual publishing -- call after login succeeds
    //[self.engine configLocalCameraPublish:true];
    //[self.engine configLocalAudioPublish:true];
    //[self.engine publish:^(int errCode) {
    //    if (errCode) {
    //        //Handle the error
    //    }
    //}];
}
Formally join the meeting
- Join the channel with the data passed in earlier (user info and the current role):
- (void)JoinChannel:(AliRtcAuthInfo*)authInfo :(NSString*)userName{
    //Join the channel
    NSLog(@"%@", authInfo);
    [self.engine joinChannel:authInfo name:userName onResult:^(NSInteger errCode) {
        //Handle the join-channel callback
        NSLog(@"joinChannel result: %d", (int)errCode);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (errCode != 0) {
                //Handle the error
            }
            [self runLoopGet];
            self.isJoinChannel = YES;
        });
        //Subscribe to the (local) audio data
        [self.engine subscribeAudioData:AliRtcAudiosourcePub];
    }];
}
Remote preview
@implementation RTCRemoterUserView
{
    AliRenderView *viewRemote;
}
- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        //Set up the remote stream view
        CGRect rc = CGRectMake(0, 0, 320, 240);
        viewRemote = [[AliRenderView alloc] initWithFrame:rc];
        self.backgroundColor = [UIColor clearColor];
        CGRect rcc = CGRectMake(0, 0, 320, 240);
        _facedetect = [[FaceDetectView alloc] initWithFrame:rcc];
        _facedetect.isRemoteWindow = true;
        _facedetect.isRTC = false;
        [self addSubview:_facedetect];
        _handdetect = [[HandDetectView alloc] initWithFrame:rcc];
        _handdetect.isRemoteWindow = true;
        _handdetect.isRTC = false;
        [self addSubview:_handdetect];
    }
    return self;
}
- (void)updateUserRenderview:(AliRenderView *)view {
    view.backgroundColor = [UIColor clearColor];
    view.frame = viewRemote.frame;
    viewRemote = view;
    [self addSubview:viewRemote];
    [self bringSubviewToFront:_facedetect];
    [self bringSubviewToFront:_handdetect];
}
@end
Position of the collection view:
CGRect rc = CGRectZero;
rc.origin.x = 10;
rc.origin.y = [UIApplication sharedApplication].statusBarFrame.size.height + 64;
rc.size = CGSizeMake(320, 240);
UICollectionViewFlowLayout *flowLayout = [[UICollectionViewFlowLayout alloc] init];
flowLayout.itemSize = CGSizeMake(320, 240);
flowLayout.minimumLineSpacing = 10;
flowLayout.minimumInteritemSpacing = 10;
flowLayout.scrollDirection = UICollectionViewScrollDirectionHorizontal;
self.remoteUserView = [[UICollectionView alloc] initWithFrame:CGRectZero collectionViewLayout:flowLayout];
self.remoteUserView.frame = rc;
self.remoteUserView.backgroundColor = [UIColor clearColor];
self.remoteUserView.delegate = self;
self.remoteUserView.dataSource = self;
self.remoteUserView.showsHorizontalScrollIndicator = NO;
[self.remoteUserView registerClass:[RTCRemoterUserView class] forCellWithReuseIdentifier:@"cell"];
[self.view addSubview:self.remoteUserView];
_remoteUserManager = [RTCSampleRemoteUserManager shareManager];
Remote-window listener (called whenever the number of remote windows changes):
- (void)onSubscribeChangedNotify:(NSString *)uid audioTrack:(AliRtcAudioTrack)audioTrack videoTrack:(AliRtcVideoTrack)videoTrack {
    //Remote subscription callback received
    dispatch_async(dispatch_get_main_queue(), ^{
        [self.remoteUserManager updateRemoteUser:uid forTrack:videoTrack];
        if (videoTrack == AliRtcVideoTrackCamera) {
            AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
            canvas.renderMode = AliRtcRenderModeAuto;
            canvas.view = [self.remoteUserManager cameraView:uid];
            [self.engine setRemoteViewConfig:canvas uid:uid forTrack:AliRtcVideoTrackCamera];
        }else if (videoTrack == AliRtcVideoTrackScreen) {
            AliVideoCanvas *canvas2 = [[AliVideoCanvas alloc] init];
            canvas2.renderMode = AliRtcRenderModeAuto;
            canvas2.view = [self.remoteUserManager screenView:uid];
            [self.engine setRemoteViewConfig:canvas2 uid:uid forTrack:AliRtcVideoTrackScreen];
        }else if (videoTrack == AliRtcVideoTrackBoth) {
            AliVideoCanvas *canvas = [[AliVideoCanvas alloc] init];
            canvas.renderMode = AliRtcRenderModeAuto;
            canvas.view = [self.remoteUserManager cameraView:uid];
            [self.engine setRemoteViewConfig:canvas uid:uid forTrack:AliRtcVideoTrackCamera];
            AliVideoCanvas *canvas2 = [[AliVideoCanvas alloc] init];
            canvas2.renderMode = AliRtcRenderModeAuto;
            canvas2.view = [self.remoteUserManager screenView:uid];
            [self.engine setRemoteViewConfig:canvas2 uid:uid forTrack:AliRtcVideoTrackScreen];
        }
        [self.remoteUserView reloadData];
    });
}
Displaying the remote video streams in the collection view:
#pragma mark - uicollectionview delegate & datasource
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section {
    return [self.remoteUserManager allOnlineUsers].count;
}
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath {
    RTCRemoterUserView *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"cell" forIndexPath:indexPath];
    RTCSampleRemoteUserModel *model = [self.remoteUserManager allOnlineUsers][indexPath.row];
    AliRenderView *view = model.view;
    _yuanDetectView = cell.facedetect;
    _yuanHandDetectView = cell.handdetect;
    [cell updateUserRenderview:view];
    return cell;
}
Callback methods:
- Remote video stream callback:
- (void)onRemoteVideoSample:(NSString *)uid videoSource:(AliRtcVideoSource)videoSource videoSample:(AliRtcVideoDataSample *)videoSample {
}
- Local video stream callback:
- (void)onCaptureVideoSample:(AliRtcVideoSource)videoSource videoSample:(AliRtcVideoDataSample *)videoSample {
}
- Audio stream callback:
- (void)onAudioSampleCallback:(AliRtcAudioSource)audioSource audioSample:(AliRtcAudioDataSample *)audioSample {
}
3. API overview
Start recording, stop recording, and end the meeting:
//Start recording & stop recording & end the meeting
//"action": "START_RECORDING" //START_RECORDING STOP_RECORDING COMPLETED
//START_RECORDING: start-recording action; send it when you are ready to record
//STOP_RECORDING: stop-recording action; send it when you are ready to stop recording
//COMPLETED: end-meeting action; usually sent after recording stops
-(void)recordsChannel:(NSString*) action{
NSDictionary * dic = @{@"Action":@"UpdateLive",@"LiveId":self.liveId,@"UserId":self.userId,@"Status":action};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject){
        NSLog(@"Success: %@", responseObject);
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Error: %@", error);
    }];
}
Leave the meeting (usually called when leaving mid-meeting):
-(void)exitChannel{
    //Leave the room mid-meeting
    NSDictionary *dic = @{@"Action":@"ExitLive", @"Channel":self.channelName, @"UserId":self.userId};
    [idrs_NetWorking HTTPWithMethod:@"POST" body:dic success:^(id _Nonnull responseObject) {
        NSLog(@"Success: %@", responseObject);
        NSDictionary *loginDic = responseObject[@"data"];
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Error: %@", error);
    }];
}
On exit, you also need to call:
//Stop the local preview
[self.engine stopPreview];
//Leave the channel
[self.engine leaveChannel];
//Destroy the SDK instance
[AliRtcEngine destroy];
AI detection
1. Initialization
-(void)initIDRSSDK{
    //AI detection
    [IDRSSDK initWithAudioCaptureType:FEED_CAPTURE_AUDIO url:@"http://console.idrs.aliyuncs.com" appId:@"the appId you applied for" packageName:@"the bundle_id you applied for" deviceId:@"device id" success:^(id responseObject) {
        self->_idrsSDK = responseObject;
    } failure:^(NSError *error) {
        NSLog(@"IDRSSDK activation failed: %@", error);
    }];
    //Local face-detection util
    _localFaceDetectUtil = [[FaceDetectUtil alloc] init];
}
2. AI capability detection
The code below is abridged: it shows only the usage, without the surrounding business logic; refer to the demo for the authoritative usage.
- Face detection
- Detect from a raw data stream:
uint8_t *data = [IDRSUtils convert420PixelBufferToRawData:newBuffer];
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc]init];
detectParam.dataType = IDRSFaceDetectInputTypeChar;
detectParam.data = data;
detectParam.width = videoSample.width;
detectParam.height = videoSample.height;
detectParam.format = 0;
detectParam.inputAngle = 0;
[_idrsSDK faceTrackFromVideo:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput *> *faces) {
}];
free(data);
Detect from a UIImage:
UIImage *image = [self.faceSDK getImageFromRPVideo:newBuffer];
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc]init];
detectParam.dataType = IDRSFaceDetectInputTypeImage;
detectParam.image = image;
detectParam.inputAngle = 0;
detectParam.outputAngle = 0;
detectParam.faceNetType = 0;
detectParam.supportFaceRecognition = isLocalFaceChanged;
detectParam.supportFaceLiveness = isLocalFaceChanged;
[_idrsSDK faceTrackFromImage:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput *> *faces) {
}];
Detect from the local camera stream (remote dual recording does not use this method):
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
IDRSFaceDetectParam *detectParam = [[IDRSFaceDetectParam alloc]init];
detectParam.dataType = IDRSFaceDetectInputTypePixelBuffer;
detectParam.buffer = pixelBuffer;
detectParam.inputAngle = inAngle;
detectParam.outputAngle = outAngle;
[_idrsSDK faceTrackFromVideo:detectParam faceDetectionCallback:^(NSError *error, NSArray<FaceDetectionOutput*> *faces) {
    dispatch_async(dispatch_get_main_queue(), ^{
        //Handle the results on the main thread
    });
}];
Face capture
UIImage *face = [weakSelf screenshotFace:newBuffer];
//Extract the facial features from the image
IDRSFaceDetectParam *dete = [[IDRSFaceDetectParam alloc] init];
dete.dataType = IDRSFaceDetectInputTypeImage;
dete.image = face;
dete.inputAngle = 0;
dete.outputAngle = 0;
NSArray<FaceDetectionOutput *> *imageface = [weakSelf.faceSDK detectFace:dete];
//Save the policyholder's facial features
if (imageface.count > 0) {
    weakSelf.faceFeature = imageface[0].feature;
    //Show a capture-success prompt
    [weakSelf SuccessShow];
}
Face comparison
NSMutableArray *myFaces = [[NSMutableArray alloc] init];
for (FaceDetectionOutput *face in faces) {
    float score = [weakSelf.faceSDK faceRecognitionSimilarity:face.feature feature2:weakSelf.faceFeature];
    [myFaces addObject:[NSNumber numberWithFloat:score]];
}
NSDictionary *face_Max = [weakSelf MaxFaceWithArray:myFaces];
if ([face_Max[@"max_number"] floatValue] > 0.5) {
    NSString *named = @"";
    if (weakSelf.Role == 1 || weakSelf.Role == 3) {
        named = @"保险代理"; //insurance agent
    }else if (weakSelf.Role == 2){
        named = @"投保人"; //policyholder
    }else{
        named = @"受益人"; //beneficiary
    }
    faces[[face_Max[@"max_index"] intValue]].label = named;
}
ID card recognition
if (weakSelf.isOCR) {
    @autoreleasepool {
        //OCR the ID card. Note: isFrontCamera must be NO here
        IDRSIDCardDetectParam *idCardParam = [[IDRSIDCardDetectParam alloc] init];
        idCardParam.dataType = IDRSIDCardInputTypePixelBuffer;
        idCardParam.buffer = newBuffer;
        NSArray<NSNumber*> *kXMediaOptionsROIKey = @[@(0.2), @(0.2), @(0.6), @(0.6)];
        IDCardDetectionOutput *ocrResult = [_idrsSDK detectIDCard:idCardParam roiKey:kXMediaOptionsROIKey rotate:@(0) isFrontCamera:NO isDetectFrontIDCard:YES];
        if (ocrResult != nil && ocrResult.num.length > 0) {
            //ID card information captured
            writeFrames++;
            if (writeFrames > 2) {
                dispatch_async(dispatch_get_main_queue(), ^{
                    writeFrames = 0;
                    //Update the meta file
                    [self.idrsSDK addPolicy:ocrResult.num title:@"人身保险投保提示书"];
                    //Show the ID card number
                    self.IDCarResult.text = ocrResult.num;
                    [self SuccessShow];
                });
            }
        }
    }
}
Liveness detection
FaceDetectionOutput *firstResult = [faces firstObject];
int liveType = [self.localFaceDetectUtil getLiveType:firstResult.faceId];
NSString *type = @"";
if (liveType == 0){
    type = @"真人"; //live person
}else if (liveType == 1){
    type = @"翻拍"; //recaptured (photo of a photo/screen)
}
//Follow with your own handling, e.g. show an alert if the face is not a live person
Action recognition: dynamic gestures
if (weakSelf.isHand) {
    @autoreleasepool {
        IDRSHandDetectParam *handParam = [[IDRSHandDetectParam alloc] init];
        handParam.dataType = IDRSHandInputTypeRGBA;
        handParam.buffer = newBuffer;
        handParam.outAngle = 0;
        NSArray<HandDetectionOutput *> *handResults = [_idrsSDK detectHandGesture:handParam];
        if (handResults.count > 0) {
            if (handResults[0].phone_touched_score > 0) {
                if (handResults[0].hand_phone_action != 0) {
                    writeFrames++;
                    if (writeFrames > 2) {
                        [self SuccessShow];
                    }
                }
                //Gesture bounding box
                dispatch_async(dispatch_get_main_queue(), ^{
                    self.handDetectView.hidden = NO;
                    CGSize size = CGSizeMake(videoSample.width, videoSample.height);
                    [self createViewFrame:size];
                    HandDetectView *handDetectView = self.handDetectView;
                    handDetectView.presetSize = size;
                    handDetectView.detectResult = handResults;
                });
            }
        }else{
            dispatch_async(dispatch_get_main_queue(), ^{
                if (weakSelf.handDetectView.hidden == NO) {
                    HandDetectView *handDetectView = (HandDetectView*)weakSelf.handDetectView;
                    handDetectView.hidden = YES;
                }
            });
        }
    }
}
Action recognition: static gestures
if (self.isStaticHand) {
    IDRSHandDetectParam *handParam = [[IDRSHandDetectParam alloc] init];
    handParam.dataType = IDRSHandInputTypeRGBA;
    handParam.buffer = newBuffer;
    handParam.outAngle = 0;
    NSArray<HandDetectionOutput *> *handResults = [_idrsSDK detectHandStaticGesture:handParam];
    dispatch_async(dispatch_get_main_queue(), ^{
        if (handResults.count > 0) {
            self.handDetectView.hidden = NO;
            CGSize size = CGSizeMake(videoSample.width, videoSample.height);
            [self createViewFrame:size];
            HandDetectView *handDetectView = self.handDetectView;
            handDetectView.presetSize = size;
            handDetectView.detectResult = handResults;
            HandDetectionOutput *handResult = handResults[0];
            if (handResult.hand_action_type == 0 && handResult.hand_static_action > 0) {
                if (handResult.hand_static_action == 6) {
                    NSLog(@"Gesture: heart");
                }else if (handResult.hand_static_action == 12){
                    NSLog(@"Gesture: thumbs-up");
                }
            }
        }else{
            dispatch_async(dispatch_get_main_queue(), ^{
                if (weakSelf.handDetectView.hidden == NO) {
                    HandDetectView *handDetectView = (HandDetectView*)weakSelf.handDetectView;
                    handDetectView.hidden = YES;
                }
            });
        }
    });
}
Face frame (gesture frame)
- Gesture frames are used the same way as face frames
//Face-frame position calculation: FaceDetectView.m
//Face bounding box:
dispatch_async(dispatch_get_main_queue(), ^{
    self.detectView.hidden = NO;
    CGSize size = CGSizeMake(videoSample.width, videoSample.height);//the original size of the returned video frame, passed to the face frame
    [self createViewFrame:size];
    FaceDetectView *faceDetectView = self.detectView;
    faceDetectView.presetSize = size;
    faceDetectView.detectResult = faces;
});
3. Wake-word recognition
Using wake words
if (_isOnNui) {
    //Callback
    _idrsSDK.onNuiCallback = ^(NSString *result) {
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"------%@", result);
            switch (weakSelf.StepNo) {
                case 3:
                    if ([result isEqual: @"\"同意\""]) { //"agree"
                        [weakSelf SuccessShow];
                    }
                    break;
                case 6:
                    if ([result isEqual: @"\"清楚\""]) { //"understood"
                        [weakSelf SuccessShow];
                    }
                    break;
                default:
                    break;
            }
        });
    };
    [_idrsSDK feedAudioFrame:frame];//Feed in the audio stream to be analyzed
    [_idrsSDK startDialog];
}
Flow control
Flow control in the demo is driven by a handler, implemented in the RemoteHandler class. The flow is fairly complex, so here is an overview first.
Basic flow
There are 11 stages:
#define START_RECORD @"开始录制"      //start recording
#define INSURANCE_AGENT @"保险代理"   //insurance agent
#define TOU_BAO_REN @"投保人"         //policyholder
#define PRIVACY @"隐私"               //privacy
#define SELF_INTRODUCYION @"自我介绍" //self-introduction
#define CONTENT_HINT @"内容提示"      //content hint
#define WARN_RISK @"风险预警"         //risk warning
#define START_SIGN @"签字"            //signing
#define START_SIGN1 @"签字1"          //signing 1
#define START_SIGN2 @"签字2"          //signing 2
#define STOP_RECORD @"结束录制"       //stop recording
They proceed in order, from top to bottom.
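Because the stages always advance in this fixed top-to-bottom order, the order can be captured once as data, and "advance to the next stage" becomes an index lookup instead of a long if-else chain. A minimal sketch using the macros above (SectionOrder and NextSection are hypothetical helpers, not part of the demo):

```objectivec
//Hypothetical helpers: the fixed stage order kept as data
static NSArray<NSString *> *SectionOrder(void) {
    return @[START_RECORD, INSURANCE_AGENT, TOU_BAO_REN, PRIVACY,
             SELF_INTRODUCYION, CONTENT_HINT, WARN_RISK,
             START_SIGN, START_SIGN1, START_SIGN2, STOP_RECORD];
}

//Returns the stage after `current`; the last stage (or an unknown one) maps to itself
static NSString *NextSection(NSString *current) {
    NSArray<NSString *> *order = SectionOrder();
    NSUInteger i = [order indexOfObject:current];
    if (i == NSNotFound || i + 1 >= order.count) return current;
    return order[i + 1];
}
```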
- There are three roles:
- Insurance agent
- Policyholder
- Beneficiary
- The detection items for each role are configurable, for example:
- Insurance agent: START_RECORD, INSURANCE_AGENT, PRIVACY, WARN_RISK
- Policyholder: TOU_BAO_REN
- Beneficiary: SELF_INTRODUCYION, START_SIGN1
- Which role each endpoint represents is also configurable, for example:
- One endpoint: insurance agent
- The other endpoint: policyholder and beneficiary
- Many flow configurations are therefore possible. The demo implements the one described above; different requirements will need corresponding changes. This is business logic: you can follow the demo or implement your own approach (the demo is not necessarily optimal).
- The demo's implementation, roughly:
result values:
- -1: no state
- 1: timeout (which timeout message is shown depends on the stage; every failure is reported as 1)
- 4: "agent portrait captured"
- 5: "policyholder portrait captured"
- 6: "ID card number recognized"
- 7: "signing action detected"
- 14: wake word matched
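These result codes can be translated to display text in one place, so every endpoint renders a consistent message. A minimal sketch (MessageForResult is a hypothetical helper, not part of the demo, and the English strings are illustrative; the demo's actual texts are the ones listed above):

```objectivec
//Hypothetical helper mapping a result code to user-facing text
static NSString *MessageForResult(int result) {
    if (result == -1) return nil;           //no state
    if (result == 1)  return @"Timed out";  //real text depends on the current stage
    NSDictionary<NSNumber *, NSString *> *messages = @{
        @4:  @"Agent portrait captured",
        @5:  @"Policyholder portrait captured",
        @6:  @"ID card number recognized",
        @7:  @"Signing action detected",
        @14: @"Wake word matched",
    };
    return messages[@(result)];
}
```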
Stage and information synchronization
Because this is a multi-party remote meeting and every endpoint runs detection, both the current stage and the detection results must be kept in sync. For example:
- Endpoints A and B are in a remote dual-recording session
A is currently in stage 2
B is also in stage 2
A performs face capture; when it finishes, A moves to stage 3
B must then also move to stage 3, and must display whether A's face capture succeeded
To synchronize stages and information, the demo polls two APIs:
- Get the stage
-(void)runLoopGet{
    __weak __typeof(self) weakSelf = self;
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/section", self.liveId];
    [idrs_NetWorking HttpGetWithMethod:method success:^(id _Nonnull responseObject) {
        NSLog(@"GET response:\n%@", responseObject);
        NSString *sectionString = responseObject[@"data"][@"section"];
        //... compare the state returned by the server with the local state; if it differs, advance to the new state locally
        if (self.isLoopRun) {
            [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(runLoopGet) userInfo:nil repeats:false];
        }
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
        NSLog(@"GET request failed: %@", error);
    }];
}
Every endpoint keeps polling this API to pull the latest stage and information.
- Push the stage
-(void)putMsg:(BOOL)isFirst WithResult:(int)res{
    NSNumber *result = [NSNumber numberWithInt:res];
    NSString *section = @"";
    if (isFirst){
        if ([_currentStep isEqual:START_RECORD]) {
            section = INSURANCE_AGENT;
        }else if ([_currentStep isEqual:INSURANCE_AGENT]){
            section = TOU_BAO_REN;
        }else if ([_currentStep isEqual:TOU_BAO_REN]){
            section = PRIVACY;
        }else if ([_currentStep isEqual:PRIVACY]){
            section = SELF_INTRODUCYION;
        }else if ([_currentStep isEqual:SELF_INTRODUCYION]){
            section = CONTENT_HINT;
        }else if ([_currentStep isEqual:CONTENT_HINT]){
            section = WARN_RISK;
        }else if ([_currentStep isEqual:WARN_RISK]){
            section = START_SIGN;
        }else if ([_currentStep isEqual:START_SIGN]){
            section = START_SIGN1;
        }else if ([_currentStep isEqual:START_SIGN1]){
            section = START_SIGN2;
        }else if ([_currentStep isEqual:START_SIGN2]){
            section = STOP_RECORD;
        }
    }else{
        section = _currentStep;
    }
    NSDictionary *dicc = @{@"name":section, @"result":result};
    //Serialize the stage (NSJSONWritingOptions apply here; none are needed)
    NSData *dataString = [NSJSONSerialization dataWithJSONObject:dicc options:0 error:nil];
    NSString *dicString = [[NSString alloc] initWithData:dataString encoding:NSUTF8StringEncoding];
    //Submit the stage information
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/section", _liveId];
    NSDictionary *dic = @{@"liveId":_liveId, @"section":dicString};
    NSLog(@"Uploading section: %@", dic[@"section"]);
    [idrs_NetWorking HttpWithPost_Get:@"PUT" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        NSLog(@"Section update failed: %@", error);
    }];
}
Parameters:
- isFirst: whether this is the first time this state is uploaded
- res: the status code
This API pushes the latest stage to the server so that the other endpoints can fetch it.
- How it works:
A is currently in stage 2
B is also in stage 2
A performs face capture; when it finishes, A moves to stage 3 and pushes the new stage to the server
B keeps polling the get-stage API and updates as soon as it pulls the new stage
Synchronizing liveness results and face presence
- When one endpoint detects a non-live person, every endpoint must be synchronized to that state and the flow paused.
- When a participant leaves the video on one endpoint, every endpoint must be notified that someone left and the flow stopped.
- Three APIs are involved:
1. Report an issue
-(void)putQuestion:(NSString*)event{
    //Upload to the server
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/event", _liveId];
    NSDictionary *dic = @{@"event":event, @"userId":self.userId};
    [idrs_NetWorking HttpWithPost_Get:@"POST" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}
Parameters:
- event: the issue description (e.g. "the policyholder is not a live person")
- When an endpoint detects an issue, it reports it to the server, which stores the exception.
2. Delete an issue
-(void)deleteQuestion{
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/event", _liveId];
    NSDictionary *dic = @{@"userId":self.userId};
    [idrs_NetWorking HttpWithPost_Get:@"DELETE" WithMethod:method Body:dic success:^(id _Nonnull responseObject) {
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}
When an endpoint's issue is resolved, it deletes the exception from the server.
3. Get exceptions
-(void)getQuestion{
    //Fetch the issues
    NSString *method = [NSString stringWithFormat:@"/api/lives/%@/events", _liveId];
    [idrs_NetWorking HttpWithPost_Get:@"GET" WithMethod:method Body:nil success:^(id _Nonnull responseObject) {
        NSDictionary *data = [responseObject objectForKey:@"data"];
        NSArray *events = data[@"events"];
        //Display the exceptions and handle them at the app layer
        if (_isLoopRun) {
            [NSTimer scheduledTimerWithTimeInterval:1.0 target:self selector:@selector(getQuestion) userInfo:nil repeats:false];
        }
    } failure:^(NSError * _Nonnull error) {
        //Handle the error
    }];
}
Every endpoint keeps polling this API for errors: if any error is present, pause the flow; once it is cleared, resume.
Playing narration
Call the RTC API to play a narration:
[_engine startAudioAccompanyWithFile:filePath onlyLocalPlay:false replaceMic:false loopCycles:1];
Parameters:
- the path of the MP4 file to play; network URLs are supported
- whether to play locally only: true plays locally only; false plays locally and also publishes to the remote endpoints
- whether to replace the mic audio stream: true replaces the local mic stream with the accompaniment; false publishes both streams together
- the loop count; -1 loops forever.
Uploading auxiliary information
Get the upload URL:
[idrs_NetWorkingManager ossUpdataWithfileName:_metaFileName block:^(id _Nonnull response, NSError * _Nonnull err) {
    NSString *url = response[@"Data"];
}];
Call the API above first; its parameters are:
- _metaFileName: the name of your meta file
- response: the network callback
- On success, the callback returns a URL, which is used for the actual upload
Once you have the upload URL, perform the actual upload:
NSString *filePath = [_idrsSDK saveMetaWithfileName:_metaFileName andfilePath:_metaFilePath];
[idrs_NetWorkingManager updataFileWithUrl:url filePath:filePath complete:^(id _Nonnull responseObject, NSError * _Nonnull error) {
    [self detections:url];
    NSLog(@"Upload succeeded: %@", responseObject);
}];
Parameters:
- url: the URL returned above (the upload destination)
- filePath: the path of the local meta file
- responseObject: the network callback
If this call succeeds, the upload is complete.