As suspected in advance, there were no details about the potential mixed reality headset at Apple's WWDC keynote. Apple's keynote speakers and chief Tim Cook generally didn't get carried away talking about AR and VR. Still, there were some announcements that could be relevant for Apple's future XR strategy.

**Potential headset processor and games upscaling**

Apple revealed the M2 processor, which is optimized for high performance at the lowest possible power consumption. That makes the M2 an obvious candidate for Apple's possible mixed reality headset, although the chip will first be used in Apple's new MacBook Air and MacBook Pro (13-inch). According to Apple, the 5-nanometer chip outperforms the M1 by 18 percent in CPU performance per watt, 35 percent in graphics, and 40 percent in AI calculations ("Neural Engine"). It also offers 50 percent more memory bandwidth (100 GB/s) and up to 24 GB of unified memory.

According to Apple, the M2 can stream multiple 4K and 8K videos simultaneously, which could be interesting for XR video broadcasts of sporting events, for example, or for office work on the Mac with XR extensions. Apple is reportedly working with Hollywood directors on video content for the headset.

Apple also dedicated part of the keynote to gaming, traditionally a domain of Windows PCs. The new M2 chip is said to display current games such as Resident Evil 8 in high resolution and at smooth frame rates, thanks in part to a new upscaling process for Metal. Here Apple relies on spatial upscaling, a process similar to AMD's FidelityFX Super Resolution, rather than the so far higher-quality AI upscaling used by Nvidia (DLSS).

**Spatial sound and spatial measurements**

Apple also wants to improve surround sound with iOS 16. Owners of newer iPhones with the TrueDepth camera system can scan their ears. Based on this ear scan, Apple then creates an individual surround sound profile for that person.
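To make the upscaling distinction above concrete: spatial upscalers (the family the article places Apple's Metal upscaler and AMD's approach in) compute each output pixel purely from neighboring pixels of the low-resolution frame, with no neural network involved. Below is a minimal sketch of the simplest such technique, bilinear interpolation, on a grayscale image stored as a list of rows. This is an illustrative toy under that assumption, not Apple's or AMD's actual algorithm.

```python
def upscale_bilinear(img, factor):
    """Spatially upscale a 2D grayscale image (list of rows) by an integer factor."""
    h, w = len(img), len(img[0])
    out = []
    for oy in range(h * factor):
        # Map the output row back into source coordinates (clamped at the edge).
        sy = min(oy / factor, h - 1)
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0
        row = []
        for ox in range(w * factor):
            sx = min(ox / factor, w - 1)
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0
            # Blend the four surrounding source pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bottom = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

# A 2x2 gradient upscaled 2x: original pixel values survive,
# new pixels are blends of their neighbors.
small = [[0.0, 1.0], [1.0, 2.0]]
big = upscale_bilinear(small, 2)
```

AI upscalers such as DLSS instead run a trained network over the frame (plus motion vectors), which is why they can reconstruct detail that no neighborhood blend can recover; the trade-off is that they need dedicated inference hardware.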