fleshed out references, re-jigged report structure, added some testing methodology

This commit is contained in:
aj 2020-03-11 21:07:23 +00:00
parent 599c405d2d
commit a8c2739cf0
3 changed files with 134 additions and 5 deletions


@@ -431,6 +431,19 @@ LiveScan
Cross Reality (XR)
\end_layout
\begin_layout Standard
\begin_inset Flex TODO Note (inline)
status open
\begin_layout Plain Layout
Should this just be on AR?
\end_layout
\end_inset
\end_layout
\begin_layout Standard
Cross reality is a broad term describing the combination of technology with
a user's experience of their surroundings in order to alter the experience
@@ -602,6 +615,23 @@ Kinect
the environment compounds this.
\end_layout
\begin_layout Subsubsection
Augmented Reality
\end_layout
\begin_layout Standard
\begin_inset Flex TODO Note (inline)
status open
\begin_layout Plain Layout
Referencing the android UE
\end_layout
\end_inset
\end_layout
\begin_layout Subsection
Kinect and RGB-D Cameras
\end_layout
@@ -1044,6 +1074,19 @@ OpenCV
Multi-Source Holoportation
\end_layout
\begin_layout Standard
\begin_inset Flex TODO Note (inline)
status open
\begin_layout Plain Layout
More?
\end_layout
\end_inset
\end_layout
\begin_layout Standard
The space of multi-source holoportation has been explored by
\begin_inset CommandInset citation
@@ -1191,6 +1234,10 @@ name "fig:World-in-Miniature-group-by-group"
\end_layout
\begin_layout Subsection
High Bandwidth Media Streaming
\end_layout
\begin_layout Section
LiveScan3D
\end_layout
@@ -1352,6 +1399,27 @@ KinectServer
between the server and a client.
\end_layout
\begin_layout Description
TransferServer
\end_layout
\begin_layout Description
TransferSocket
\end_layout
\begin_layout Standard
\begin_inset Flex TODO Note (inline)
status open
\begin_layout Plain Layout
Populate
\end_layout
\end_inset
\end_layout
\begin_layout Standard
This structure can be seen in figure
\begin_inset CommandInset ref
@@ -1555,6 +1623,14 @@ OpenGL
means that for single sensor setups this is also the location of the camera.
\end_layout
\begin_layout Subsection
\noun on
LiveScan
\noun default
Android
\end_layout
\begin_layout Subsection
Design Considerations
\end_layout
@@ -1603,7 +1679,7 @@ Work has been undertaken that allows multiple concurrent TCP connections
\end_layout
\begin_layout Section
Server Developments
\end_layout
\begin_layout Standard
@@ -2684,6 +2760,14 @@ LiveScan
in a multi-source context.
\end_layout
\begin_layout Section
Mobile Developments
\end_layout
\begin_layout Section
Testing Methodology
\end_layout
\begin_layout Section
Results
\end_layout
@@ -2745,7 +2829,7 @@ Holoportation and multi-source configurations thereof are important technologies
\noun on
Kinect
\noun default
, has accelerated the space.
\end_layout
\begin_layout Standard


@@ -27,6 +27,17 @@
- Could change behaviour with poor network conditions
* Identification of clients and sources (handshake)
## Testing Methodology
* Bandwidth measures
- Base codebase vs my version
* FPS
* CPU measurements?
* Different source configurations
* Qualitative Measures?
- User experience
- Particular points of immersion/discomfort
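The bandwidth and FPS measures above need consistent definitions before the base codebase can be compared against the modified version. A minimal sketch of that bookkeeping (illustrative Python only; LiveScan3D itself is a C# codebase, and these function names are hypothetical):

```python
def frames_per_second(frame_count: int, elapsed_s: float) -> float:
    # Average frame rate over the measurement window.
    return frame_count / elapsed_s if elapsed_s > 0 else 0.0


def throughput_mbps(total_bytes: int, elapsed_s: float) -> float:
    # Application-level throughput in megabits per second:
    # bytes -> bits, scale to Mb, divide by the window length.
    return (total_bytes * 8) / 1e6 / elapsed_s if elapsed_s > 0 else 0.0


if __name__ == "__main__":
    # e.g. 300 frames totalling 12.5 MB received over a 10 s window
    print(frames_per_second(300, 10.0))       # 30.0 FPS
    print(throughput_mbps(12_500_000, 10.0))  # 10.0 Mbps
```

Sampling the same fixed window for each source configuration (one source, two sources, n sources) would let the quantitative measures line up with the qualitative ones.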
## Mid-Year Feedback
* Link citations better


@@ -12,6 +12,7 @@
@inproceedings{holoportation,
author = {Orts, Sergio and Rhemann, Christoph and Fanello, Sean and Kim, David and Kowdle, Adarsh and Chang, Wayne and Degtyarev, Yury and Davidson, Philip and Khamis, Sameh and Dou, Minsong and Tankovich, Vladimir and Loop, Charles and Cai, Qin and Chou, Philip and Mennicken, Sarah and Valentin, Julien and Kohli, Pushmeet and Pradeep, Vivek and Wang, Shenlong and Izadi, Shahram},
booktitle = {Proceedings of the 29th Annual Symposium on User Interface Software and Technology},
doi = {10.1145/2984511.2984517},
month = {10},
organization = {Microsoft Research},
@@ -28,7 +29,9 @@
month = {July},
number = {7},
pages = {46--52},
publisher = {IEEE},
title = {Immersive 3D Telepresence},
url = {https://ieeexplore.ieee.org/document/6861875/},
volume = {47},
year = {2014}
}
@@ -42,7 +45,9 @@
month = {April},
number = {4},
pages = {616--625},
publisher = {IEEE},
title = {Immersive Group-to-Group Telepresence},
url = {https://ieeexplore.ieee.org/document/6479190/},
volume = {19},
year = {2013}
}
@@ -95,23 +100,32 @@
@article{wim,
author = {Stoakley, Richard and Conway, Matthew and Pausch, Y},
booktitle = {CHI},
doi = {10.1145/223904.223938},
editor = {Katz, Irvin R. and Mack, Robert L. and Marks, Linn and Rosson, Mary Beth and Nielsen, Jakob},
ee = {https://doi.org/10.1145/223904.223938},
isbn = {0-201-84705-1},
month = {02},
pages = {265--272},
publisher = {ACM/Addison-Wesley},
title = {Virtual Reality on a WIM: Interactive Worlds in Miniature},
year = {1970}
}
@article{original-kinect-microsoft,
author = {Zhang, Zhengyou},
doi = {10.1109/MMUL.2012.24},
issn = {1941-0166},
journal = {IEEE MultiMedia},
keywords = {Cameras; Three Dimensional Displays; Sensors; Games; Video Recording; Multimedia; Microsoft Kinect; Human-Computer Interaction; Motion Capture; Computer Vision; Engineering; Computer Science},
language = {eng},
month = feb,
number = {2},
number2 = {2},
pages = {4--10},
publisher = {IEEE},
title = {Microsoft Kinect Sensor and Its Effect},
url = {https://ieeexplore.ieee.org/document/6190806/},
volume = {19},
year = {2012-02}
}
@@ -134,6 +148,7 @@
@article{greenhouse-kinect,
author = {Nissimov, Sharon and Goldberger, Jacob and Alchanatis, Victor},
doi = {10.1016/j.compag.2015.02.001},
issn = {0168-1699},
journal = {Computers and Electronics in Agriculture},
keywords = {Obstacle Detection; Navigation; Kinect Sensor; Rgb-D; Agriculture},
@@ -142,6 +157,7 @@
pages = {104,115},
publisher = {Elsevier B.V},
title = {Obstacle detection in a greenhouse environment using the Kinect sensor},
url = {https://www.sciencedirect.com/science/article/pii/S0168169915000435},
volume = {113},
year = {2015-04}
}
@@ -150,10 +166,12 @@
abstract = {Construction is a high hazard industry which involves many factors that are potentially dangerous to workers. Safety has always been advocated by many construction companies, and they have been working hard to make sure their employees are protected from fatalities and injuries. With the advent of Virtual and Augmented Reality (VR/AR), there has been a witnessed trend of capitalizing on sophisticated immersive VR/AR applications to create forgiving environments for visualizing complex workplace situations, building up risk-preventive knowledge and undergoing training. To better understand the state-of-the-art of VR/AR applications in construction safety (VR/AR-CS) and from which to uncover the related issues and propose possible improvements, this paper starts with a review and synthesis of research evidence for several VR/AR prototypes, products and the related training and evaluation paradigms. Predicated upon a wide range of well-acknowledged scholarly journals, this paper comes up with...},
address = {Amsterdam},
author = {Li, Xiao and Yi, Wen and Chi, Hung-Lin and Wang, Xiangyu and Chan, Albert},
doi = {10.1016/j.autcon.2017.11.003},
issn = {0926-5805},
journal = {Automation in Construction},
keywords = {Studies; Augmented Reality; Occupational Safety; Safety Training; Construction Industry; Augmented Reality; Journals; Hard Surfacing; Inspection; Virtual Reality; Occupational Safety; Taxonomy; Hazard Identification; Training},
language = {eng},
pages = {150--162},
publisher = {Elsevier BV},
title = {A critical review of virtual and augmented reality (VR/AR) applications in construction safety},
url = {http://search.proquest.com/docview/2012059651},
@@ -163,9 +181,17 @@
@inproceedings{kinectv1/v2-accuracy-precision,
author = {Wasenm{\"u}ller, Oliver and Stricker, Didier},
booktitle = {ACCV Workshops (2)},
doi = {10.1007/978-3-319-54427-4_3},
editor = {Chen, Chu-Song and Lu, Jiwen and Ma, Kai-Kuang},
ee = {https://doi.org/10.1007/978-3-319-54427-4_3},
month = {11}, month = {11},
pages = {34--45},
publisher = {Springer},
series = {Lecture Notes in Computer Science},
title = {Comparison of Kinect V1 and V2 Depth Images in Terms of Accuracy and Precision},
url = {http://dblp.uni-trier.de/db/conf/accv/accv2016-w2.html#WasenmullerS16},
volume = 10117,
year = {2016}
}
@@ -188,9 +214,14 @@
@inproceedings{velt,
author = {Fender, Andreas and M{\"u}ller, J{\"o}rg},
booktitle = {ISS},
doi = {10.1145/3279778.3279794},
editor = {Koike, Hideki and Ratti, Carlo and Takeuchi, Yuichiro and Fukuchi, Kentaro and Scott, Stacey and Plasencia, Diego Mart{\'\i}nez},
ee = {https://doi.org/10.1145/3279778.3279794},
isbn = {978-1-4503-5694-7},
month = {11},
pages = {73--83},
publisher = {ACM},
title = {Velt: A Framework for Multi RGB-D Camera Systems},
year = {2018}
}
@@ -199,6 +230,8 @@
abstract = {RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually autocalibrating, self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.},
author = {Jones, Brett and Sodhi, Rajinder and Murdock, Michael and Mehra, Ravish and Benko, Hrvoje and Wilson, Andy and Ofek, Eyal and MacIntyre, Blair and Raghuvanshi, Nikunj and Shapira, Lior},
booktitle = {UIST '14 Proceedings of the 27th annual ACM symposium on User interface software and technology},
ee = {https://doi.org/10.1145/2642918.2647383},
isbn = {978-1-4503-3069-5},
month = {October},
pages = {637--644},
publisher = {ACM},
@@ -228,6 +261,7 @@
number = {2},
pages = {239--256},
title = {A method for registration of 3-D shapes},
url = {https://ieeexplore.ieee.org/document/121791/},
volume = {14},
year = {1992}
}