Author(s):

  • Dargazany, Aras
  • Stegagno, Paolo
  • Mankodiya, Kunal

Abstract:

This work introduces Wearable Deep Learning (WearableDL), a unifying conceptual architecture inspired by the human nervous system that frames the convergence of deep learning (DL), the Internet of Things (IoT), and wearable technologies (WT) as follows: (1) the brain, the core of the central nervous system (CNS), represents deep learning for cloud computing and big data processing; (2) the spinal cord, the part of the CNS connected to the brain, represents the IoT for fog computing and big data flow/transfer; and (3) the peripheral sensory and motor nerves, components of the peripheral nervous system (PNS), represent wearable technologies as edge devices for big data collection. In recent years, wearable IoT devices have enabled the streaming of big data from smart wearables (e.g., smartphones, smartwatches, smart clothing, and personalized gadgets) to cloud servers. Now, the ultimate challenges are (1) how to analyze the collected wearable big data without any background information and without labels describing the underlying activity, and (2) how to recognize the spatial/temporal patterns in this unstructured big data in order to support end users in decision making, e.g., medical diagnosis, rehabilitation efficiency, and/or sports performance. Deep learning has recently gained popularity due to its ability to (1) scale to big data (scalability); (2) learn feature representations by itself, in an end-to-end fashion, without manual feature extraction or hand-crafted features; and (3) achieve high accuracy and precision when learning from raw labeled or unlabeled (supervised or unsupervised) data. To understand the current state of the art, we systematically reviewed over 100 related, recently published scientific works on the development of DL approaches for wearable and person-centered technologies. The review supports and strengthens the proposed bioinspired architecture of WearableDL. The article concludes with an outlook and insightful suggestions for WearableDL and its application to big data analytics.
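
To make the spatial/temporal pattern-recognition challenge concrete, the following minimal PyTorch sketch (an illustration only, not the authors' implementation) shows an end-to-end model for raw wearable sensor windows: a 1-D convolution captures local patterns across sensor channels and an LSTM captures temporal dependencies, with no hand-crafted features. The input shape (3-axis accelerometer, 128-sample windows) and the 6 activity classes are assumed purely for illustration.

# Minimal sketch of end-to-end spatiotemporal learning on raw wearable data.
# Assumptions (hypothetical, not from the article): 3 sensor channels,
# 128-sample windows, 6 activity classes.
import torch
import torch.nn as nn

class WearableNet(nn.Module):
    def __init__(self, in_channels=3, num_classes=6):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),  # local patterns
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)  # temporal patterns
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):               # x: (batch, channels, time)
        x = self.conv(x)                # (batch, 32, time/2)
        x = x.transpose(1, 2)           # (batch, time/2, 32) for the LSTM
        _, (h, _) = self.lstm(x)        # h: (1, batch, 64), last hidden state
        return self.head(h.squeeze(0))  # class logits

# Toy usage: a batch of 8 windows from a 3-axis sensor.
model = WearableNet()
logits = model(torch.randn(8, 3, 128))
print(logits.shape)  # torch.Size([8, 6])

In a WearableDL-style deployment, such a model would typically be trained in the cloud on aggregated data and its inference pushed toward fog or edge devices.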

Documentation:

https://doi.org/10.1155/2018/8125126
