"RIBLA BROADCAST (β)" Review Review A Rich -rich expression is possible easily with only web cam
The avatar operation tool "RIBLA BROADCAST (β)", announced by Avex Technologies, has become a hot topic. Being free software released by the major Avex Group is a big part of the appeal. It has also drawn attention for performing skeleton estimation and facial expression recognition with only a webcam, enabling hand tracking and automatic expression recognition without any additional devices (see related articles).
Many people are probably wondering how much can actually be done with a webcam alone. In this article, we take a look at how "RIBLA BROADCAST (β)" feels to use.
Starting and basic operation
The screen immediately after startup is simple. Select an avatar from the list on the left of the screen, choose a webcam, and press "Start" to bring up the avatar. Two avatars are provided by default; to add a new one, simply drag and drop a VRM file onto the screen and it is imported automatically.
When you press "Start", you will move to the main screen and the web camera will be launched.It seems that even a strange posture is covered by skeletal estimation, but if you want a stable operation, I felt that it would be better to stretch your back and wait.
First of all, the avatar's range of motion is quite wide. It follows you up to an angle close to fully sideways. Not only the neck but the whole body moves in concert, so unnatural motion where only the neck turns is less likely, which is a nice touch.
However, the response is almost too good, and the motion can come across as exaggerated. After turning around and returning to my original position, the tracking position was sometimes slightly off. If this bothers you, you can individually adjust how far the avatar tilts and twists under "face and body inclination".
Here is the motion with the parameters lowered a little across the board. Flashy movements like those at the maximum settings are no longer possible, but the range of motion is still ample and tracking is stable.
With automatic expression recognition turned on ("AUTO" at the left end of the facial-expression icons), it reads changes in expression with relatively high accuracy. On the other hand, turning sideways can be misread as a change in expression, and the expression sometimes changes at unintended moments. It is probably best treated as a feature for use while facing the camera more or less head-on.
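The recognizer itself is a black box, but a landmark-based approach helps explain why side profiles cause misfires: the 2D distances such detectors rely on compress as the head yaws. A hypothetical sketch using MediaPipe Face Mesh, detecting only mouth openness, with the threshold invented purely for illustration:

```python
# Hypothetical sketch, not RIBLA BROADCAST's recognizer.
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_face = mp.solutions.face_mesh

cap = cv2.VideoCapture(0)
with mp_face.FaceMesh(max_num_faces=1, refine_landmarks=True) as face_mesh:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            lm = results.multi_face_landmarks[0].landmark
            # Inner-lip landmarks 13 (upper) and 14 (lower); normalize the
            # gap by face height so distance to the camera cancels out.
            mouth_gap = abs(lm[13].y - lm[14].y)
            face_height = abs(lm[10].y - lm[152].y)  # forehead to chin
            openness = mouth_gap / face_height
            # When the head turns sideways, these 2D distances shrink,
            # which is one way a profile view can read as a "new expression".
            if openness > 0.08:  # threshold chosen arbitrarily
                print("mouth open:", round(openness, 3))
cap.release()
```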
It also depends on your PC's processing power, but in my environment the avatar's movement became slightly heavier overall with automatic expression recognition on; turning it off made the movement crisp and stable. Whether to enable it comes down to whether you prioritize light movement or having the avatar's expression track your own. When not relying on automatic recognition, you change expressions by pressing the on-screen buttons or the keys assigned in the settings. Key assignments for expressions can be configured via Settings (the gear icon at the bottom right of the screen) → "Face settings".
The avatar's lip sync can be driven either by the microphone or by recognition from the webcam, selected with the "Lip Sync" switch in the list on the left of the screen. Webcam recognition can reproduce expressions closer to your actual face, such as opening your mouth without making a sound. On the other hand, a yawn, for example, will also be reflected in the avatar, so if you do not want such extra movements picked up, it is better to set lip sync to the microphone.
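As a side note on why the two modes behave differently: microphone-driven lip sync typically maps audio loudness to mouth openness, so silent mouth movements such as a yawn produce no signal at all. A minimal sketch of that idea (the gain and smoothing constants here are arbitrary choices of mine, not values from the tool):

```python
# Illustrative amplitude-based lip sync, not the tool's implementation.
# Requires: pip install sounddevice numpy
import numpy as np
import sounddevice as sd

mouth_open = 0.0  # 0.0 = closed, 1.0 = fully open

def audio_callback(indata, frames, time, status):
    global mouth_open
    rms = float(np.sqrt(np.mean(indata ** 2)))  # loudness of this chunk
    target = min(1.0, rms * 20.0)               # gain chosen arbitrarily
    # Smooth so the mouth does not flicker from frame to frame
    mouth_open = 0.7 * mouth_open + 0.3 * target
    print(f"mouth_open={mouth_open:.2f}")

# 16 kHz mono is plenty for a loudness envelope
with sd.InputStream(channels=1, samplerate=16000, callback=audio_callback):
    sd.sleep(5000)  # run for five seconds
```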
You can also change the avatar's posture on screen. There are three types: "standing", "seated", and "seated (both hands fixed below the knees)". According to the manual, the last one is intended for use cases such as game streaming, where your real hands stay on a controller.
Click the gear icon at the bottom right to open the settings screen. Because this is a beta version, some settings cannot be changed yet. The main items you can change at present are the background, brightness adjustment, and lip sync settings. Incidentally, up to three sets of settings can be saved, and you can switch between them from the main screen at any time.
I changed the background to one of the preset images. The background can be set to a freely chosen solid color, one of the preset background images, or an image you have prepared yourself.
Press the "F11" key to switch to the full screen display (the same is true even if you click the full screen from the menu).Once the basic settings are completed, switch to the full screen display and import them into OBS as video sources should be the basic usage.
Putting hand tracking to the test
Now let's look at the centerpiece feature: hand tracking with a webcam alone. First, the accuracy when moving both arms is quite good, and it improves the farther you are from the camera. Presumably, having as much of your body as possible in the camera's view is desirable for accurate skeleton estimation.
Several tools already achieve hand tracking with only a webcam, but RIBLA BROADCAST seemed particularly strong at recognizing hand shapes. It recognized not only the classic peace sign but also a rock-and-roll sign with only the middle and ring fingers bent. Arm rotation also appears to be recognized and reflected in the avatar.
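How RIBLA BROADCAST classifies hand shapes is not documented, but with webcam hand landmarks the usual approach is to test which fingers are extended. A simplified sketch using MediaPipe Hands; the peace/rock tests are my own crude heuristics, which assume an upright hand:

```python
# Simplified hand-shape check, not RIBLA BROADCAST's recognizer.
# Requires: pip install mediapipe opencv-python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def finger_extended(lm, tip, pip):
    # For an upright hand, an extended fingertip sits above its PIP joint,
    # i.e. has a smaller y in image coordinates. A crude heuristic.
    return lm[tip].y < lm[pip].y

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=2) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        for hand in results.multi_hand_landmarks or []:
            lm = hand.landmark  # 21 landmarks per hand
            # Tip/PIP indices: index 8/6, middle 12/10, ring 16/14, pinky 20/18
            fingers = [finger_extended(lm, t, p)
                       for t, p in [(8, 6), (12, 10), (16, 14), (20, 18)]]
            if fingers == [True, True, False, False]:
                print("peace sign")
            elif fingers == [True, False, False, True]:
                print("rock sign")  # index + pinky up, middle/ring bent
cap.release()
```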
On the other hand, I did see some odd behavior. When I posed with only one arm, the shoulder on the side not being raised would often drop noticeably. Occasionally, raising just one arm immediately after raising both caused the arm that was not raised to lift as well.
In both cases, the incidence dropped somewhat when I moved farther from the camera. Even so, raising only one arm still makes the other shoulder droop slightly, so for a natural-looking pose you end up consciously lifting the other shoulder. There must be plenty of situations where you raise just one hand, so I hope this gets fixed to look natural.
I also ran into the limits of a webcam alone: tracking breaks down when you are too close to the camera, movements that are too fast get lost, and motions such as bringing both hands together cannot be captured. Even so, achieving this much hand tracking in a free tool is impressive enough.
An avatar operation tool that is easy to pick up, with plenty of future potential
The setting items are relatively few, yet the range of motion is quite wide, and with features such as automatic recognition and savable setting presets, I felt it makes it easy to produce a screen that looks good just by picking it up and using it. Introducing avatars intuitively by drag and drop is easy, too.
On the other hand, it is still a beta version and showed some instability. Although I could not pin down how to reproduce it, there were cases where the person-recognition function froze or failed to recognize me at all (in those cases, going back to the avatar selection screen and re-selecting an avatar fixed it). Also, since operation becomes heavy especially during automatic expression recognition, it is better to turn that off if you are worried about your machine's specs. There are no truly fatal bugs, and unless you plan to lean heavily on hand tracking they will not bother you much, so I look forward to improvements in the official release.
Using it, I felt that the farther you are from the camera, the more stable the movement becomes. With hand tracking in particular, greater distance between you and the camera means better accuracy and stability. For that reason, if you intend to build your setup around hand tracking, you will want to place the webcam some distance away, and either rely on automatic expression recognition or prepare a separate controller for switching expressions.
Another feature of "RIBLA BROADCAST (β)" is that it also supports the Mac. Surprisingly few avatar operation tools run on the Mac, so Mac users gain one more solid option. In any case, the tool is still in beta, and its future development is something to look forward to.
The project "RIBLA LABORATORY" to which this tool belongs also announces the original character avatar "Ikoma Mill", and "plain clothes" can be used for free, and sales of underwear forms and body models are also started.He is doing a big push for avatar -related businesses.It can be said that it is a future tool, including that.
(Text by Asada Kazura)
● Related link / VTuber tool "RIBLA BROADCAST (β)" (BOOTH)