Citation
American Psychological Association 7th edition (APA 7th)
Richoz, S., Perez-Uribe, A., Birch, P., & Roggen, D. (2019). Benchmarking deep classifiers on mobile devices for vision-based transportation recognition. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (UbiComp ’19) (pp. 803–807). ACM. https://doi.org/10.1145/3341162.3344849

Abstract

Vision-based human activity recognition can provide rich contextual information but has traditionally been computationally prohibitive. We present a characterisation of five convolutional neural networks (DenseNet169, MobileNet, ResNet50, VGG16, VGG19) implemented with TensorFlow Lite and running on three state-of-the-art Android mobile phones. The networks were trained to recognise eight modes of transportation from camera images using the SHL Locomotion and Transportation dataset. We analyse the effect of thread count and back end (CPU, GPU, Android Neural Networks API) when classifying the images provided by the phones' rear cameras, and report processing time and classification accuracy.
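To make the configuration space concrete, the sketch below shows how a TensorFlow Lite interpreter on Android can be pointed at one of the three back ends with a chosen thread count, and how a single frame can be timed. This is a minimal Kotlin illustration, not the authors' code: the asset name, input layout, eight-class output shape and timing helper are assumptions, and the paper's own pipeline (camera capture, preprocessing, per-network input sizes) is not reproduced here.

```kotlin
import android.content.Context
import android.os.SystemClock
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.gpu.GpuDelegate
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Back ends compared in the benchmark: plain CPU, the GPU delegate, and NNAPI.
enum class Backend { CPU, GPU, NNAPI }

// Memory-map a .tflite flatbuffer bundled in the app's assets.
// "model.tflite" below is a placeholder name, not the authors' file.
fun loadModel(context: Context, assetName: String): MappedByteBuffer =
    context.assets.openFd(assetName).use { fd ->
        FileInputStream(fd.fileDescriptor).channel.use { channel ->
            channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

// Build an Interpreter for one (back end, thread count) configuration.
fun buildInterpreter(context: Context, backend: Backend, numThreads: Int): Interpreter {
    val options = Interpreter.Options().apply {
        setNumThreads(numThreads)              // thread count mainly affects the CPU path
        when (backend) {
            Backend.CPU -> { /* default CPU kernels, no delegate */ }
            Backend.GPU -> addDelegate(GpuDelegate())
            Backend.NNAPI -> addDelegate(NnApiDelegate())
        }
    }
    return Interpreter(loadModel(context, "model.tflite"), options)
}

// Run one preprocessed camera frame (1 x H x W x 3 floats) through the network and
// return the per-class scores together with the inference time in microseconds.
fun classify(interpreter: Interpreter, frame: Array<Array<Array<FloatArray>>>): Pair<FloatArray, Long> {
    val scores = Array(1) { FloatArray(8) }    // eight transportation modes
    val start = SystemClock.elapsedRealtimeNanos()
    interpreter.run(frame, scores)
    val elapsedUs = (SystemClock.elapsedRealtimeNanos() - start) / 1_000
    return scores[0] to elapsedUs
}
```

A benchmarking loop along these lines would build one interpreter per (network, back end, thread count) configuration and average the per-frame time returned by classify over many camera frames.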
