Web AR: Technologies Making It Possible
Technologies for Web AR:
Web technologies are now emerging that meet the basic requirements of Web AR and also offer ways to improve its performance.
[Figure: browser support for the Web AR technologies. Credits: https://caniuse.com]
These technologies are WebRTC, WebAssembly, Web Workers, and WebGL. We'll discuss each of them in detail.
WebRTC
Using WebRTC, we can add real-time communication capabilities to an application built on top of an open standard. It supports sending video, voice, and generic data between peers, allowing developers to build powerful voice and video communication solutions. The technologies behind WebRTC are implemented as an open web standard and exposed as regular JavaScript APIs in all major browsers; for native clients, such as Android and iOS applications, a library is available that provides the same functionality.
At a high level, the WebRTC standard covers two different technologies: media capture devices and peer-to-peer connectivity.
Media capture devices include video cameras and microphones, but also screen capturing “devices”. For cameras and microphones, we use navigator.mediaDevices.getUserMedia() to capture MediaStreams.
For screen recording, we use navigator.mediaDevices.getDisplayMedia() instead. The peer-to-peer connectivity is handled by the RTCPeerConnection interface. This is the central point for establishing and controlling the connection between two peers in WebRTC.
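As a rough illustration of how these two pieces fit together, here is a minimal sketch that captures the screen with getDisplayMedia() and hands the resulting tracks to an RTCPeerConnection. The function name shareScreenWithPeer and the STUN server URL are assumptions for illustration, and the signaling needed to actually connect two peers is omitted.

```javascript
// A minimal sketch, assuming a secure (HTTPS) context.
// Signaling (exchanging offers/answers between peers) is omitted.
async function shareScreenWithPeer() {
  // Prompts the user to pick a screen, window, or tab to capture.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });

  // Create the peer connection; the STUN server URL is illustrative.
  const peerConnection = new RTCPeerConnection({
    iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
  });

  // Hand each captured track to the connection for sending.
  for (const track of stream.getTracks()) {
    peerConnection.addTrack(track, stream);
  }
  return peerConnection;
}
```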
The WebRTC standard provides APIs for accessing cameras and microphones, which can be reached in JavaScript through the navigator.mediaDevices object, which implements the MediaDevices interface. From this object, we can enumerate all connected devices, listen for device changes (when a device is connected or disconnected), and open a device to retrieve a MediaStream. The most common entry point is the function getUserMedia(), which returns a promise that resolves to a MediaStream for the matching media devices.
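A minimal sketch of the most common case, opening the default camera and microphone with getUserMedia(); the function name openCameraAndMicrophone is just an illustrative choice:

```javascript
async function openCameraAndMicrophone() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({
      audio: true,
      video: true,
    });
    console.log('Opened MediaStream:', stream.id);
    return stream;
  } catch (error) {
    // The promise rejects if the user denies permission
    // or no matching device is found.
    console.error('getUserMedia failed:', error);
  }
}
```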
Querying media devices:
In a more complex application, we will most likely want to check all the connected cameras and microphones; this can be done by calling the function enumerateDevices(). It returns a promise that resolves to an array of MediaDeviceInfo objects, each describing a known media device. We can use this to present a UI that lets the user pick the device they prefer. Each MediaDeviceInfo contains a property named kind with the value audioinput, audiooutput, or videoinput, indicating what type of media device it is.
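A small sketch of how this might look; the helper name listCamerasAndMicrophones is hypothetical:

```javascript
async function listCamerasAndMicrophones() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const cameras = devices.filter((d) => d.kind === 'videoinput');
  const microphones = devices.filter((d) => d.kind === 'audioinput');
  // Note: device labels may be empty until the user has granted
  // media permission to the page.
  console.log('Cameras:', cameras.map((d) => d.label));
  console.log('Microphones:', microphones.map((d) => d.label));
  return { cameras, microphones };
}
```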
Listening for device changes:
Most computers support plugging in various devices at runtime: a webcam connected over USB, a Bluetooth headset, or a set of external speakers. To properly support this, a web application should listen for changes to the set of media devices by adding a listener to navigator.mediaDevices for the devicechange event.
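A minimal sketch of such a listener; since the devicechange event carries no details about what changed, we simply re-enumerate the devices:

```javascript
navigator.mediaDevices.addEventListener('devicechange', async () => {
  // Re-query the full device list on every change.
  const devices = await navigator.mediaDevices.enumerateDevices();
  console.log('Device list changed, now', devices.length, 'devices');
  // A real application would refresh its device-picker UI here.
});
```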
Media constraints:
The constraints object, which must implement the MediaStreamConstraints interface, is passed as a parameter to getUserMedia() and allows us to open a media device that matches certain requirements. It is recommended that applications using the getUserMedia() API first check the existing devices and then specify a constraint that matches the exact device using the deviceId constraint.
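A sketch of opening one specific camera by its deviceId; the helper name openCamera is hypothetical, and the id would come from an earlier enumerateDevices() call:

```javascript
async function openCamera(cameraId) {
  const constraints = {
    video: {
      // 'exact' makes the request fail rather than silently
      // fall back to a different camera.
      deviceId: { exact: cameraId },
    },
  };
  return navigator.mediaDevices.getUserMedia(constraints);
}
```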
Local playback:
Once a media device has been opened and we have a MediaStream available, we can assign it to a video or audio element to play the stream locally.
The HTML needed for a typical video element used with getUserMedia() will usually have the attributes autoplay and playsinline. The autoplay attribute causes new streams assigned to the element to play automatically, and the playsinline attribute allows video to play inline, instead of only in full screen, on certain mobile browsers.
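A minimal sketch of local playback, assuming a video element with the hypothetical id localVideo that carries the attributes described above:

```javascript
// Assumes markup like: <video id="localVideo" autoplay playsinline></video>
const videoElement = document.querySelector('#localVideo');

async function playLocalVideo() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  // Assigning the stream to srcObject starts playback,
  // thanks to the autoplay attribute on the element.
  videoElement.srcObject = stream;
}
```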
To learn more about Web AR technologies, please visit https://medium.com/@soumil25.