Google has a plan to improve Meet’s portrait mode: use your PC’s GPU
If you frequently use Google Meet, an app that has returned to its pre-pandemic limits, you surely know the different effects it offers. Among them: applying portrait mode in real time, or isolating yourself against an artificial background, as if you were in front of a chroma key. Google knows how to leverage its AI for this kind of feature, but it's not just about software.

With the latest version of Google Meet, Google wants to squeeze more power from the GPU to improve the result. It achieves this by combining its own artificial intelligence models with WebGL, a standard that lets the browser render graphics on the GPU.
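As a rough illustration of what the effect does (this is a simplified sketch, not Google's actual pipeline), the virtual-background trick boils down to blending each pixel of your camera frame with a background image, weighted by a per-pixel segmentation mask:

```javascript
// Illustrative sketch only: blend a camera frame over a virtual background
// using a per-pixel segmentation mask (0 = background, 1 = person).
// The flat grayscale arrays are a simplification of real RGBA frames.
function compositeWithMask(frame, background, mask) {
  return frame.map((fg, i) => {
    const a = mask[i];
    // Standard linear (alpha) blend: the mask weight decides how much
    // of the person shows through at this pixel.
    return Math.round(a * fg + (1 - a) * background[i]);
  });
}

// Example: a 4-pixel "frame" where the last two pixels belong to the person.
const out = compositeWithMask([10, 20, 200, 220], [90, 90, 90, 90], [0, 0, 1, 1]);
console.log(out); // [90, 90, 200, 220]
```

The quality of the final image therefore depends directly on how precise the mask is, which is why Google keeps improving the segmentation model.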

Your GPU, at the service of Google Meet

Google wants better image-segmentation quality in Meet, and to get it, it taps the PC's GPU. With the latest Meet update there are relevant changes to the model: from now on, the segmentation model works on high-resolution (HD) input images, instead of the low-resolution images used previously.

To avoid consuming excessive resources, Meet uses the GPU's low-power cores

To avoid consuming excessive resources on your PC, Meet will use the GPU's low-power cores, which are well suited to running high-resolution convolutional models. Until now, the PC's CPU was used in combination with Google's real-time AI models, but it was more limited for HD segmentation calculations.

As Google explains, it is not so easy to get the GPU to apply its full performance to image segmentation. The company notes that, through WebGL, the GPU currently achieves only about 25% of the raw performance OpenGL is capable of. This is because WebGL (the standard that lets websites render with your GPU) was designed for rendering images, not for the heavy workloads a machine learning model generates.
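One consequence of that rendering-oriented design is that data for a neural network has to be disguised as image data: WebGL shaders read RGBA textures, so ML runtimes typically pack four scalar values into each texel, one per color channel, to use the GPU's memory bandwidth efficiently. A minimal sketch of that packing (the function name and layout are ours, not Google's):

```javascript
// Illustrative sketch: pack a flat list of scalar values into RGBA texels,
// 4 values per texel (one per channel), padding the tail with zeros.
// This is the kind of reshaping WebGL-based ML runtimes do so that tensor
// data can be uploaded as an ordinary texture.
function packRGBA(values) {
  const texels = [];
  for (let i = 0; i < values.length; i += 4) {
    texels.push([
      values[i] ?? 0,
      values[i + 1] ?? 0,
      values[i + 2] ?? 0,
      values[i + 3] ?? 0,
    ]);
  }
  return texels;
}

console.log(packRGBA([1, 2, 3, 4, 5, 6]));
// [[1, 2, 3, 4], [5, 6, 0, 0]]
```

Every such detour between "tensor shape" and "texture shape" costs time, which is part of why plain WebGL leaves so much of the GPU's raw performance on the table.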


The key to overcoming this limitation is MRT (Multiple Render Targets), a feature of current GPUs that reduces the bandwidth Google's neural network needs, letting it reach (as Google promises) up to 90% of the native power of OpenGL.
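A back-of-the-envelope sketch of why writing to multiple render targets saves bandwidth: without MRT, a shader that must produce N output maps runs N separate passes, re-reading its input texture each time; with MRT it reads the input once and writes all N targets in a single pass. The byte counts below are illustrative, not Google's measurements:

```javascript
// Illustrative bandwidth accounting for a shader pass that reads one input
// texture and must produce `numOutputs` output maps of equal size.
function passBandwidth(inputBytes, outputBytes, numOutputs, useMRT) {
  if (useMRT) {
    // Single pass: one input read, N render-target writes.
    return inputBytes + numOutputs * outputBytes;
  }
  // N passes: the input texture is re-read for every output map.
  return numOutputs * (inputBytes + outputBytes);
}

const input = 1_000_000; // e.g. a ~500x500 RGBA feature map
console.log(passBandwidth(input, input, 4, false)); // 8000000 bytes moved
console.log(passBandwidth(input, input, 4, true));  // 5000000 bytes moved
```

The more intermediate maps a convolutional model produces per layer, the larger the saving, which is how Meet's segmentation gets closer to native OpenGL throughput inside the browser.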
