This Week in 3D Imaging

Research Highlights:

Oxford University researchers proposed a technique for learning 3D deformable object categories from raw single-view photos. The method is built on an autoencoder that factors each input image into depth, albedo, viewpoint, and illumination. Their experiments show that it can recover the 3D shape of human faces, cat faces, and cars from single-view photographs with high accuracy, without supervision or a prior shape model, and that on benchmarks it outperforms a baseline method that uses supervision at the level of 2D image correspondences.
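The reconstruction half of such a decomposition can be illustrated with its shading step: given a predicted depth map and albedo, a Lambertian model re-renders the image. Below is a minimal NumPy sketch; the function names and the simple ambient-plus-diffuse model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def normals_from_depth(depth):
    # Estimate surface normals from a depth map via finite differences.
    dzdx = np.gradient(depth, axis=1)
    dzdy = np.gradient(depth, axis=0)
    n = np.stack([-dzdx, -dzdy, np.ones_like(depth)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def shade(albedo, depth, light_dir, ambient=0.3, diffuse=0.7):
    # Lambertian shading: image = albedo * (ambient + diffuse * max(0, n . l)).
    n = normals_from_depth(depth)
    l = np.asarray(light_dir, dtype=float)
    l = l / np.linalg.norm(l)
    lambertian = np.clip(n @ l, 0.0, None)          # (H, W) cosine term
    return albedo * (ambient + diffuse * lambertian)[..., None]
```

In the actual approach this forward model sits inside the autoencoder, so reconstruction error back-propagates into the predicted depth, albedo, and lighting.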

Researchers from the University of Southern California proposed a truly differentiable rendering framework. It renders colorized meshes directly with differentiable functions and back-propagates effective supervision signals to mesh vertices and their attributes from a variety of image representations, such as silhouette, shading, and color images. They showed that using the proposed renderer significantly improves unsupervised single-view 3D reconstruction, both qualitatively and quantitatively.
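The key trick in differentiable rasterization is replacing hard, binary pixel coverage with a smooth probabilistic one, so gradients flow from pixels back to geometry. The sketch below illustrates that pattern with 2D discs instead of mesh triangles for brevity: each primitive's coverage is a sigmoid of signed distance, and per-pixel coverages are combined with a product-based aggregation. All names here are illustrative, not the framework's API.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_silhouette(centers, radii, size=32, sigma=0.02):
    # Pixel grid over the unit square.
    ys, xs = np.mgrid[0:size, 0:size] / (size - 1)
    grid = np.stack([xs, ys], axis=-1)                  # (H, W, 2)
    not_covered = np.ones((size, size))
    for c, r in zip(centers, radii):
        d = np.linalg.norm(grid - np.asarray(c), axis=-1) - r  # signed distance
        coverage = sigmoid(-d / sigma)                  # soft per-primitive coverage
        not_covered *= 1.0 - coverage                   # probabilistic aggregation
    return 1.0 - not_covered                            # smooth silhouette in [0, 1]
```

Because every step is a smooth function of the primitive parameters, a silhouette loss against a target image yields usable gradients on `centers` and `radii`; the mesh case does the same with triangle distances and vertex positions.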

A new technology that creates 3D images using ultrasound may improve the accuracy of thermal ablation therapy for liver cancer, according to a simulation study by researchers at Western University and Lawson Health Research Institute. While surgery is one treatment option, thermal ablation, which uses heat to destroy the cancerous tumor, may carry less risk and require less recovery time, and it is also available to patients who are not surgical candidates. However, thermal ablation requires precise needle positioning to treat the cancer without endangering nearby organs and blood vessels.

Voxel map of a cross section of a human heart. (Credit: Nicholas Jacobson)

A group of researchers from the University of Colorado has created a new method for converting medical images, such as CT or MRI scans, into extraordinarily detailed 3D computer models. The development is a significant step toward printing realistic models of human anatomy that doctors can manipulate in the real world by pressing, poking, and prodding. The method starts from a Digital Imaging and Communications in Medicine (DICOM) file, the standard format for the 3D data that CT and MRI scans produce. MacCurdy and his coworkers use specialized software to turn that data into voxels, effectively chopping an organ into tiny cubes, each with a volume far smaller than a teardrop.
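The voxelization step amounts to resampling a scan's anisotropic slice stack (slices are often spaced further apart than in-plane pixels) onto a uniform voxel grid. Below is a minimal NumPy sketch using nearest-neighbor resampling, assuming the DICOM pixel data has already been loaded into an array (e.g., with pydicom); it is an illustrative simplification, not the team's software.

```python
import numpy as np

def to_isotropic_voxels(slices, spacing, voxel_mm=1.0):
    """Resample a (z, y, x) slice stack with per-axis spacing in mm
    into an isotropic voxel grid via nearest-neighbor lookup."""
    spacing = np.asarray(spacing, dtype=float)
    extent_mm = np.array(slices.shape) * spacing        # physical size per axis
    out_shape = np.maximum(1, np.round(extent_mm / voxel_mm)).astype(int)
    # For each output voxel, pick the nearest source index along each axis.
    idx = [np.minimum((np.arange(n) * voxel_mm / s).astype(int), dim - 1)
           for n, s, dim in zip(out_shape, spacing, slices.shape)]
    return slices[np.ix_(idx[0], idx[1], idx[2])]
```

For example, a stack of 2 slices spaced 5 mm apart with 1 mm pixels becomes a 10-slice isotropic volume, which can then be meshed or 3D printed.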

More articles from Stratovan Corporation
