Publication Details
Language | English |
---|---|
Authors | Yusuke Oguma and Koichi Kise |
Title | Media-Independent Stamp-Based Document Annotation Using Document Image Retrieval |
Published in | Proc. of the 1st International Workshop on Visual Recognition and Retrieval for Mixed and Augmented Reality |
Pages | 4 |
Venue | Fukuoka, Japan |
Peer reviewed | Yes |
Presentation type | Oral presentation |
Date | October 2015 |
Abstract | In recent years, electronic documents have become popular. One of the advantages of electronic documents is that they provide a method of putting and sharing annotations on documents. However, the market size of paper documents is still much larger than that of electronic documents, and we continue to use them. We consider that it is better to have a method of annotation applicable not only to electronic documents but also to paper documents. In this paper, we propose a method of annotating both electronic and paper documents by capturing them as images. We use a smartphone as a device and make the method work in real time. As a way of annotation, we propose to use "stamps", which are pictorial icons representing opinions of readers (like, dislike, difficult, interesting, etc.). This helps readers to put annotations more easily compared to text-based annotations. |
- BibTeX entry
@InProceedings{Kise2015,
  author    = {Yusuke Oguma and Koichi Kise},
  title     = {Media-Independent Stamp-Based Document Annotation Using Document Image Retrieval},
  booktitle = {Proc. of the 1st International Workshop on Visual Recognition and Retrieval for Mixed and Augmented Reality},
  year      = 2015,
  month     = oct,
  numpages  = {4},
  location  = {Fukuoka, Japan}
}