Hide my Gaze with EOG! Towards Closed-Eye Gaze Gesture Passwords that Resist Observation-Attacks with Electrooculography in Smart Glasses.
Findling, R. D.; Quddus, T.; and Sigg, S.
In 17th International Conference on Advances in Mobile Computing and Multimedia, 2019.
paper
link
bibtex
abstract
@InProceedings{Findling_19_HidemyGaze,
author = {Rainhard Dieter Findling and Tahmid Quddus and Stephan Sigg},
booktitle = {17th International Conference on Advances in Mobile Computing and Multimedia},
title = {Hide my Gaze with {EOG}! {T}owards Closed-Eye Gaze Gesture Passwords that Resist Observation-Attacks with Electrooculography in Smart Glasses},
year = {2019},
abstract = {Smart glasses allow for gaze gesture passwords as a hands-free form of mobile authentication. However, pupil movements for password input are easily observed by attackers, who can thereby derive the password. In this paper we investigate closed-eye gaze gesture passwords with EOG sensors in smart glasses. We propose an approach to detect and recognize closed-eye gaze gestures, together with 7 and 9 character gaze gesture alphabets. Our evaluation indicates good gaze gesture detection rates. However, recognition is challenging, specifically for vertical eye movements, with 71.2\%-86.5\% accuracy and better results for open than for closed eyes. We further find that closed-eye gaze gesture passwords are difficult to attack from observation, with a 0\% success rate in our evaluation, while attacks on open eye passwords succeed with 61\%. This indicates that closed-eye gaze gesture passwords protect the authentication secret significantly better than their open eye counterparts.},
url_Paper = {http://ambientintelligence.aalto.fi/paper/findling_closed_eye_eog.pdf},
project = {hidemygaze},
group = {ambience}
}
Smart glasses allow for gaze gesture passwords as a hands-free form of mobile authentication. However, pupil movements for password input are easily observed by attackers, who can thereby derive the password. In this paper we investigate closed-eye gaze gesture passwords with EOG sensors in smart glasses. We propose an approach to detect and recognize closed-eye gaze gestures, together with 7 and 9 character gaze gesture alphabets. Our evaluation indicates good gaze gesture detection rates. However, recognition is challenging, specifically for vertical eye movements, with 71.2%-86.5% accuracy and better results for open than for closed eyes. We further find that closed-eye gaze gesture passwords are difficult to attack from observation, with a 0% success rate in our evaluation, while attacks on open eye passwords succeed with 61%. This indicates that closed-eye gaze gesture passwords protect the authentication secret significantly better than their open eye counterparts.
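The abstract does not detail the detection step; as a rough, hypothetical sketch of the kind of processing an EOG gaze gesture detector involves, the following Python snippet segments candidate gestures from a single horizontal EOG channel by thresholding the slope of the smoothed signal. All function names, thresholds, and the sampling rate are invented for illustration and are not the authors' implementation.

import numpy as np

def detect_gaze_events(eog, fs=250, slope_thresh=40.0, min_gap_s=0.15):
    """Segment candidate gaze gestures from a 1-D EOG channel (illustrative)."""
    # Smooth with a short moving average to suppress electrode noise.
    win = max(1, int(0.05 * fs))
    smooth = np.convolve(eog, np.ones(win) / win, mode="same")
    # Saccades show up as steep slopes in the EOG signal.
    slope = np.abs(np.gradient(smooth)) * fs
    active = slope > slope_thresh
    # Group active samples into events, merging gaps shorter than min_gap_s.
    events, start, last = [], None, None
    gap = int(min_gap_s * fs)
    for i, a in enumerate(active):
        if a:
            if start is None:
                start = i
            last = i
        elif start is not None and i - last > gap:
            events.append((start, last))
            start = None
    if start is not None:
        events.append((start, last))
    return events  # list of (start, end) sample indices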
Tennis Stroke Classification: Comparing Wrist and Racket as IMU Sensor Position.
Ebner, C. J.; and Findling, R. D.
In 17th International Conference on Advances in Mobile Computing and Multimedia, 2019.
paper
link
bibtex
abstract
@InProceedings{Ebner_19_TennisStrokeClassification,
author = {Christopher J. Ebner and Rainhard Dieter Findling},
booktitle = {17th International Conference on Advances in Mobile Computing and Multimedia},
title = {Tennis Stroke Classification: Comparing Wrist and Racket as IMU Sensor Position},
year = {2019},
abstract = {Automatic tennis stroke recognition can help tennis players improve their training experience. Previous work has used sensor positions on both the wrist and the tennis racket, whose different physiological aspects bring different sensing capabilities. However, no comparison of the performance of the two positions has been done yet. In this paper we comparatively assess wrist and racket sensor positions for tennis stroke detection and classification. We investigate detection and classification rates with 8 well-known stroke types and visualize their differences in 3D acceleration and angular velocity. Our stroke detection utilizes peak detection with thresholding and windowing on the derivative of the sensed acceleration, while for stroke recognition we evaluate different feature sets and classification models. Despite the different physiological aspects of the wrist and the racket as sensor positions, results for a controlled environment indicate similar performance in both stroke detection (98.5\%-99.5\%) and user-dependent and user-independent classification (89\%-99\%).},
url_Paper = {http://ambientintelligence.aalto.fi/paper/Tennis_Stroke_Recognition.pdf},
group = {ambience}}
Automatic tennis stroke recognition can help tennis players improve their training experience. Previous work has used sensor positions on both the wrist and the tennis racket, whose different physiological aspects bring different sensing capabilities. However, no comparison of the performance of the two positions has been done yet. In this paper we comparatively assess wrist and racket sensor positions for tennis stroke detection and classification. We investigate detection and classification rates with 8 well-known stroke types and visualize their differences in 3D acceleration and angular velocity. Our stroke detection utilizes peak detection with thresholding and windowing on the derivative of the sensed acceleration, while for stroke recognition we evaluate different feature sets and classification models. Despite the different physiological aspects of the wrist and the racket as sensor positions, results for a controlled environment indicate similar performance in both stroke detection (98.5%-99.5%) and user-dependent and user-independent classification (89%-99%).
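The detection step named in the abstract (peak detection with thresholding and windowing on the derivative of the sensed acceleration) can be sketched in a few lines of Python. The threshold, spacing, and window values below are illustrative placeholders, not the paper's parameters.

import numpy as np
from scipy.signal import find_peaks

def detect_strokes(acc, fs=100, jerk_thresh=80.0, min_spacing_s=0.8):
    """Detect stroke candidates in 3-D acceleration of shape (N, 3)."""
    # The magnitude of the acceleration derivative (jerk) peaks at stroke impact.
    jerk = np.linalg.norm(np.diff(acc, axis=0), axis=1) * fs
    peaks, _ = find_peaks(jerk, height=jerk_thresh,
                          distance=int(min_spacing_s * fs))
    # Cut a fixed window around each peak for later feature extraction.
    half = int(0.5 * fs)
    windows = [acc[max(0, p - half):p + half] for p in peaks]
    return peaks, windows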
CORMORANT: On Implementing Risk-Aware Multi-Modal Biometric Cross-Device Authentication For Android.
Hintze, D.; Füller, M.; Scholz, S.; Findling, R. D.; Muaaz, M.; Kapfer, P.; Nüssler, W.; and Mayrhofer, R.
In 17th International Conference on Advances in Mobile Computing and Multimedia, 2019.
paper
link
bibtex
abstract
@InProceedings{Hintze_19_CORMORANTImplementingRisk,
author = {Daniel Hintze and Matthias F\"uller and Sebastian Scholz and Rainhard Dieter Findling and Muhammad Muaaz and Philipp Kapfer and Wilhelm N\"ussler and Ren\'e Mayrhofer},
booktitle = {17th International Conference on Advances in Mobile Computing and Multimedia},
title = {CORMORANT: On Implementing Risk-Aware Multi-Modal Biometric Cross-Device Authentication For Android},
year = {2019},
abstract = {This paper presents the design and open source implementation of CORMORANT, an Android authentication framework able to increase the usability and security of mobile authentication. It uses transparent behavioral and physiological biometrics like gait, face, voice, and keystroke dynamics to continuously evaluate the user’s identity without explicit interaction. Using signals like location, time of day, and nearby devices to assess the risk of unauthorized access, the required level of confidence in the user’s identity is dynamically adjusted. Authentication results are shared securely, end-to-end encrypted using the Signal messaging protocol, with trusted devices to facilitate cross-device authentication for co-located devices, detected using Bluetooth Low Energy beacons. CORMORANT is able to reduce the authentication overhead by up to 97\% compared to conventional knowledge-based authentication whilst increasing security at the same time. We share our perspective on some of the successes and shortcomings we encountered implementing and evaluating CORMORANT, in the hope of informing others working on similar projects.},
url_Paper = {http://ambientintelligence.aalto.fi/paper/Hintze_19_CORMORANTImplementingRisk_cameraReady.pdf},
group = {ambience}}
This paper presents the design and open source implementation of CORMORANT, an Android authentication framework able to increase the usability and security of mobile authentication. It uses transparent behavioral and physiological biometrics like gait, face, voice, and keystroke dynamics to continuously evaluate the user’s identity without explicit interaction. Using signals like location, time of day, and nearby devices to assess the risk of unauthorized access, the required level of confidence in the user’s identity is dynamically adjusted. Authentication results are shared securely, end-to-end encrypted using the Signal messaging protocol, with trusted devices to facilitate cross-device authentication for co-located devices, detected using Bluetooth Low Energy beacons. CORMORANT is able to reduce the authentication overhead by up to 97% compared to conventional knowledge-based authentication whilst increasing security at the same time. We share our perspective on some of the successes and shortcomings we encountered implementing and evaluating CORMORANT, in the hope of informing others working on similar projects.
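To make the risk-aware mechanism concrete, here is a minimal, hypothetical Python sketch of how contextual risk signals could raise the confidence required before access is granted. The weights, signal names, and thresholds are invented and are not taken from CORMORANT's open source implementation.

def required_confidence(base=0.6, location_risk=0.0, time_risk=0.0,
                        unknown_devices=0):
    """Map contextual risk signals to a required authentication confidence."""
    # Blend contextual risk signals into one risk score in [0, 1].
    risk = min(1.0, 0.5 * location_risk + 0.3 * time_risk + 0.05 * unknown_devices)
    # The riskier the context, the more confidence we demand before granting access.
    return min(0.99, base + (1.0 - base) * risk)

def decide(identity_confidence, **risk_signals):
    """Grant access transparently, or fall back to explicit authentication."""
    if identity_confidence >= required_confidence(**risk_signals):
        return "grant"
    return "prompt for explicit authentication"

print(decide(0.7, location_risk=0.9))  # risky context -> explicit prompt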
Predicting the Category of Fire Department Operations.
Pirklbauer, K.; and Findling, R. D.
In Emerging Research Projects and Show Cases Symposium (SHOW 2019), 2019.
paper
link
bibtex
abstract
@InProceedings{Pirklbauer_19_PredictingCategoryFire,
author = {Kevin Pirklbauer and Rainhard Dieter Findling},
booktitle = {Emerging Research Projects and Show Cases Symposium ({SHOW} 2019)},
title = {Predicting the Category of Fire Department Operations},
year = {2019},
abstract = {Voluntary fire departments have limited human and material resources. Machine-learning-aided prediction of fire department operation details can benefit their resource planning and distribution. While there is previous work on predicting certain aspects of operations within a given operation category, operation categories themselves have not been predicted yet. In this paper we propose an approach to fire department operation category prediction based on location, time, and weather information, and compare the performance of multiple machine learning models with cross-validation. To evaluate our approach, we use two years of fire department data from Upper Austria, featuring 16,827 individual operations, and predict its three major operation categories. Preliminary results indicate a prediction accuracy of 61\%. While this performance is already noticeably better than uninformed prediction (34\% accuracy), we intend to further reduce the prediction error utilizing more sophisticated features and models.},
url_Paper = {http://ambientintelligence.aalto.fi/paper/momm2019_fire_department_operation_prediction.pdf},
group = {ambience}
}
Voluntary fire departments have limited human and material resources. Machine-learning-aided prediction of fire department operation details can benefit their resource planning and distribution. While there is previous work on predicting certain aspects of operations within a given operation category, operation categories themselves have not been predicted yet. In this paper we propose an approach to fire department operation category prediction based on location, time, and weather information, and compare the performance of multiple machine learning models with cross-validation. To evaluate our approach, we use two years of fire department data from Upper Austria, featuring 16,827 individual operations, and predict its three major operation categories. Preliminary results indicate a prediction accuracy of 61%. While this performance is already noticeably better than uninformed prediction (34% accuracy), we intend to further reduce the prediction error utilizing more sophisticated features and models.
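A minimal sketch of the described model comparison with cross-validation, using scikit-learn on synthetic stand-in data; the actual features, encodings, and candidate models are not given in this listing, so everything below is illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: rows would hold encoded location, hour, weekday, and
# weather readings; y holds the three major operation categories.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 3, size=1000)

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(type(model).__name__, round(scores.mean(), 3))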
With whom are you talking? Privacy in Speech Interfaces.
Bäckström, T.; Das, S.; Zarazaga, P. P.; Sigg, S.; Findling, R.; and Laakasuo, M.
In Proceedings of the 4th annual conference of the MyData Global network (MyData 2019), Helsinki, Finland, September 2019.
link
bibtex
abstract
@inproceedings{Backstrom_2019_MyData,
author = {Tom B\"ackstr\"om and Sneha Das and Pablo Perez Zarazaga and Stephan Sigg and Rainhard Findling and Michael Laakasuo},
title = {With whom are you talking? Privacy in Speech Interfaces},
booktitle = {Proceedings of the 4th annual conference of the MyData Global network ({MyData} 2019)},
year = {2019},
address = {Helsinki, Finland},
month = sep,
abstract = {Speech is about interaction. It is more than just passing messages – the listener nods and finishes the sentence for you. Interaction is so essential a part of normal speech that non-interactive speech has its own name: a monologue. It's not normal. Normal speech is about interaction. Privacy is a very natural part of such spoken interactions. We intuitively lower our voices to a whisper when we want to tell a secret. We thus change the way we speak depending on the level of privacy. In a public speech, we would not reveal intimate secrets. We thus change the content of our speech depending on the level of privacy. Furthermore, in a cafeteria, we would match our speaking volume to the background noise. We therefore change our speech in an interaction with the surroundings. Overall, we change both the manner of speaking and its content, in an interaction with our environment. Our research team is interested in the question of how such notions of privacy should be taken into account in the design of speech interfaces, such as Alexa/Amazon, Siri/Apple, Google and Mycroft. We believe that in the design of good user interfaces, you should strive for technology which is intuitive to use. If your speech assistant handles privacy in a similar way to a natural person, then most likely it would feel natural to the user. A key concept for us is modelling the users’ experience of privacy. Technology should understand our feelings towards privacy, how we experience it, and act accordingly. From the myData perspective, this means that all (speech) data is about interactions between two or more parties. Ownership of such data is then also shared among the participating parties. There is no singular owner of data, but access and management of data must always happen in mutual agreement. In fact, the same applies to many other media as well. It is obvious that chatting on WhatsApp is a shared experience. Interesting (=good) photographs are those which entail a story; "This is when we went to the beach with Sophie." The myData concept should be adapted to take into account such frequently appearing real-life data. In our view, data becomes more interesting when it is about an interaction. In other words, since interaction is so central to our understanding of the world, it should then also be reflected in our data representations. To include the most significant data, we should turn our attention from myData to focus on ourData. Here, the importance of data is then dependent on, and even defined by, the question: with whom are you talking?},
group = {ambience}}
Speech is about interaction. It is more than just passing messages – the listener nods and finishes the sentence for you. Interaction is so essential a part of normal speech that non-interactive speech has its own name: a monologue. It's not normal. Normal speech is about interaction. Privacy is a very natural part of such spoken interactions. We intuitively lower our voices to a whisper when we want to tell a secret. We thus change the way we speak depending on the level of privacy. In a public speech, we would not reveal intimate secrets. We thus change the content of our speech depending on the level of privacy. Furthermore, in a cafeteria, we would match our speaking volume to the background noise. We therefore change our speech in an interaction with the surroundings. Overall, we change both the manner of speaking and its content, in an interaction with our environment. Our research team is interested in the question of how such notions of privacy should be taken into account in the design of speech interfaces, such as Alexa/Amazon, Siri/Apple, Google and Mycroft. We believe that in the design of good user interfaces, you should strive for technology which is intuitive to use. If your speech assistant handles privacy in a similar way to a natural person, then most likely it would feel natural to the user. A key concept for us is modelling the users’ experience of privacy. Technology should understand our feelings towards privacy, how we experience it, and act accordingly. From the myData perspective, this means that all (speech) data is about interactions between two or more parties. Ownership of such data is then also shared among the participating parties. There is no singular owner of data, but access and management of data must always happen in mutual agreement. In fact, the same applies to many other media as well. It is obvious that chatting on WhatsApp is a shared experience. Interesting (=good) photographs are those which entail a story; "This is when we went to the beach with Sophie." The myData concept should be adapted to take into account such frequently appearing real-life data. In our view, data becomes more interesting when it is about an interaction. In other words, since interaction is so central to our understanding of the world, it should then also be reflected in our data representations. To include the most significant data, we should turn our attention from myData to focus on ourData. Here, the importance of data is then dependent on, and even defined by, the question: with whom are you talking?
On the use of stray wireless signals for sensing: a look beyond 5G for the next generation industry.
Savazzi, S.; Sigg, S.; Vicentini, F.; Kianoush, S.; and Findling, R.
IEEE Computer, SI on Transformative Computing and Communication, 52(7): 25-36. 2019.
doi
link
bibtex
abstract
@article{Savazzi_2019_transformative,
author={Stefano Savazzi and Stephan Sigg and Federico Vicentini and Sanaz Kianoush and Rainhard Findling},
journal={IEEE Computer, SI on Transformative Computing and Communication},
title={On the use of stray wireless signals for sensing: a look beyond 5G for the next generation industry},
year={2019},
number = {7},
pages = {25-36},
volume = {52},
doi = {10.1109/MC.2019.2913626},
abstract = {Transformative techniques to capture and process wireless stray radiation originating from different radio sources are gaining increasing attention. They can be applied to human sensing, behavior recognition, localization, and mapping. The omnipresent radio-frequency (RF) stray radiation of wireless devices (WiFi, Cellular, or any Personal/Body Area Network) encodes a 3D view of all objects traversed by its propagation. A trained machine learning model is then applied to features extracted in real time from radio signals to isolate body-induced footprints or environmental alterations. The technology can augment and transform existing radio devices into ubiquitously distributed sensors that simultaneously act as wireless transmitters and receivers (e.g. fast time-multiplexed). Thereby, 5G-empowered tiny device networks transform into a dense web of RF-imaging links that extract a view of an environment, for instance, to monitor manufacturing processes in next generation industrial set-ups (Industry 4.0, I4.0). This article highlights emerging transformative computing tools for radio sensing, promotes key technology enablers in 5G communication, and reports deployment experiences.},
project = {radiosense},
group = {ambience}}
Transformative techniques to capture and process wireless stray radiation originating from different radio sources are gaining increasing attention. They can be applied to human sensing, behavior recognition, localization, and mapping. The omnipresent radio-frequency (RF) stray radiation of wireless devices (WiFi, Cellular, or any Personal/Body Area Network) encodes a 3D view of all objects traversed by its propagation. A trained machine learning model is then applied to features extracted in real time from radio signals to isolate body-induced footprints or environmental alterations. The technology can augment and transform existing radio devices into ubiquitously distributed sensors that simultaneously act as wireless transmitters and receivers (e.g. fast time-multiplexed). Thereby, 5G-empowered tiny device networks transform into a dense web of RF-imaging links that extract a view of an environment, for instance, to monitor manufacturing processes in next generation industrial set-ups (Industry 4.0, I4.0). This article highlights emerging transformative computing tools for radio sensing, promotes key technology enablers in 5G communication, and reports deployment experiences.
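As a hypothetical illustration of the sensing principle rather than the article's processing chain, the following Python sketch turns a raw received-signal-strength (RSSI) stream into simple per-window features that a trained classifier could map to body-induced channel alterations.

import numpy as np

def rssi_window_features(rssi, fs=50, window_s=2.0):
    """Per-window features from an RSSI stream (illustrative only)."""
    # Body movement between transmitter and receiver perturbs the channel;
    # per-window variance, range, and low-frequency energy are common cues.
    win = int(window_s * fs)
    feats = []
    for i in range(0, len(rssi) - win + 1, win):
        w = np.asarray(rssi[i:i + win], dtype=float)
        w = w - w.mean()
        spectrum = np.abs(np.fft.rfft(w))
        feats.append([w.var(), w.max() - w.min(), spectrum[1:10].sum()])
    return np.array(feats)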
CORMORANT: Ubiquitous Risk-Aware Multi-Modal Biometric Authentication Across Mobile Devices.
Hintze, D.; Füller, M.; Scholz, S.; Findling, R.; Muaaz, M.; Kapfer, P.; Koch, E.; and Mayrhofer, R.
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). September 2019.
doi
link
bibtex
abstract
@article{Hintze_2019_Ubicomp,
author = {Daniel Hintze and Matthias F\"uller and Sebastian Scholz and Rainhard Findling and Muhammad Muaaz and Philipp Kapfer and Eckhard Koch and Ren\'{e} Mayrhofer},
journal = {Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT)},
title = {CORMORANT: Ubiquitous Risk-Aware Multi-Modal Biometric Authentication Across Mobile Devices},
year = {2019},
month = sep,
abstract = {People own and carry an increasing number of ubiquitous mobile devices, such as smartphones, tablets, and notebooks. Being small and mobile, those devices have a high propensity to become lost or stolen. Since mobile devices provide access to their owners’ digital lives, strong authentication is vital to protect sensitive information and services against unauthorized access. However, at least one in three devices is unprotected, with the inconvenience of traditional authentication being the paramount reason. We present the concept of CORMORANT, an approach to significantly reduce the manual burden of mobile user verification through risk-aware, multi-modal biometric, cross-device authentication. Transparent behavioral and physiological biometrics like gait, voice, face, and keystroke dynamics are used to continuously evaluate the user’s identity without explicit interaction. The required level of confidence in the user’s identity is dynamically adjusted based on the risk of unauthorized access derived from signals like location, time of day, and nearby devices. Authentication results are shared securely with trusted devices to facilitate cross-device authentication for co-located devices. Conducting a large-scale agent-based simulation of 4,000 users based on more than 720,000 days of real-world device usage traces and 6.7 million simulated robberies and thefts sourced from police reports, we found the proposed approach is able to reduce the frequency of password entries required on smartphones by 97.82\% whilst simultaneously reducing the risk of unauthorized access in the event of a crime by 97.72\%, compared to conventional knowledge-based authentication.},
doi={10.1145/2800835.2800906},
group = {ambience}}
People own and carry an increasing number of ubiquitous mobile devices, such as smartphones, tablets, and notebooks. Being small and mobile, those devices have a high propensity to become lost or stolen. Since mobile devices provide access to their owners’ digital lives, strong authentication is vital to protect sensitive information and services against unauthorized access. However, at least one in three devices is unprotected, with the inconvenience of traditional authentication being the paramount reason. We present the concept of CORMORANT, an approach to significantly reduce the manual burden of mobile user verification through risk-aware, multi-modal biometric, cross-device authentication. Transparent behavioral and physiological biometrics like gait, voice, face, and keystroke dynamics are used to continuously evaluate the user’s identity without explicit interaction. The required level of confidence in the user’s identity is dynamically adjusted based on the risk of unauthorized access derived from signals like location, time of day, and nearby devices. Authentication results are shared securely with trusted devices to facilitate cross-device authentication for co-located devices. Conducting a large-scale agent-based simulation of 4,000 users based on more than 720,000 days of real-world device usage traces and 6.7 million simulated robberies and thefts sourced from police reports, we found the proposed approach is able to reduce the frequency of password entries required on smartphones by 97.82% whilst simultaneously reducing the risk of unauthorized access in the event of a crime by 97.72%, compared to conventional knowledge-based authentication.
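One building block implied above is fusing per-modality biometric scores into a single, time-decaying identity confidence. The Python sketch below shows one generic way to do this; the half-life, weighting scheme, and interface are assumptions, not CORMORANT's actual fusion model.

import time

class ConfidenceFusion:
    """Fuse per-modality biometric scores into one identity confidence."""

    def __init__(self, half_life_s=300.0):
        self.half_life_s = half_life_s
        self.scores = {}  # modality -> (score in [0, 1], weight, timestamp)

    def update(self, modality, score, weight=1.0):
        self.scores[modality] = (score, weight, time.time())

    def confidence(self):
        # Older observations count exponentially less (half-life decay).
        now, num, den = time.time(), 0.0, 0.0
        for score, weight, ts in self.scores.values():
            decay = 0.5 ** ((now - ts) / self.half_life_s)
            num += weight * decay * score
            den += weight * decay
        return num / den if den else 0.0

fusion = ConfidenceFusion()
fusion.update("gait", 0.8)
fusion.update("face", 0.95, weight=2.0)
print(round(fusion.confidence(), 3))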
Closed-Eye Gaze Gestures: Detection and Recognition of Closed-Eye Movements with Cameras in Smart Glasses.
Findling, R. D.; Nguyen, L. N.; and Sigg, S.
In International Work-Conference on Artificial Neural Networks, 2019.
paper
doi
link
bibtex
abstract
@InProceedings{Rainhard_2019_iwann,
author={Rainhard Dieter Findling and Le Ngu Nguyen and Stephan Sigg},
title={Closed-Eye Gaze Gestures: Detection and Recognition of Closed-Eye Movements with Cameras in Smart Glasses},
booktitle={International Work-Conference on Artificial Neural Networks},
year={2019},
doi = {10.1007/978-3-030-20521-8_27},
abstract ={Gaze gestures bear potential for user input with mobile devices, especially smart glasses, due to being always available and hands-free. So far, gaze gesture recognition approaches have utilized open-eye movements only and disregarded closed-eye movements. This paper is a first investigation of the feasibility of detecting and recognizing closed-eye gaze gestures from close-up optical sources, e.g. eye-facing cameras embedded in smart glasses. We propose four different closed-eye gaze gesture protocols, which extend the alphabet of existing open-eye gaze gesture approaches. We further propose a methodology for detecting and extracting the corresponding closed-eye movements with full optical flow, time series processing, and machine learning. In the evaluation of the four protocols we find closed-eye gaze gestures to be detected 82.8%-91.6% of the time, and extracted gestures to be recognized correctly with an accuracy of 92.9%-99.2%.},
url_Paper = {http://ambientintelligence.aalto.fi/findling/pdfs/publications/Findling_19_ClosedEyeGaze.pdf},
project = {hidemygaze},
group = {ambience}}
Gaze gestures bear potential for user input with mobile devices, especially smart glasses, due to being always available and hands-free. So far, gaze gesture recognition approaches have utilized open-eye movements only and disregarded closed-eye movements. This paper is a first investigation of the feasibility of detecting and recognizing closed-eye gaze gestures from close-up optical sources, e.g. eye-facing cameras embedded in smart glasses. We propose four different closed-eye gaze gesture protocols, which extend the alphabet of existing open-eye gaze gesture approaches. We further propose a methodology for detecting and extracting the corresponding closed-eye movements with full optical flow, time series processing, and machine learning. In the evaluation of the four protocols we find closed-eye gaze gestures to be detected 82.8%-91.6% of the time, and extracted gestures to be recognized correctly with an accuracy of 92.9%-99.2%.
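As a minimal stand-in for the optical flow front end described in the abstract, the following Python sketch computes dense Farneback optical flow between consecutive grayscale frames of an eye-region video and averages it into one (dx, dy) motion vector per frame; the function name and parameter values are illustrative, not the paper's.

import cv2
import numpy as np

def eye_motion_series(video_path):
    """Reduce an eye-region video to one mean (dx, dy) flow vector per frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError("could not read video: " + video_path)
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    series = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback optical flow between consecutive grayscale frames.
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        series.append(flow.reshape(-1, 2).mean(axis=0))
        prev = gray
    cap.release()
    return np.array(series)  # shape (n_frames - 1, 2)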
Workout Type Recognition and Repetition Counting with CNNs from 3D Acceleration Sensed on the Chest.
Skawinski, K.; Roca, F. M.; Findling, R. D.; and Sigg, S.
In International Work-Conference on Artificial Neural Networks, volume 11506 of LNCS, pages 347–359, June 2019.
paper
doi
link
bibtex
abstract
@InProceedings{Ferran_2019_iwann,
author={Kacper Skawinski and Ferran Montraveta Roca and Rainhard Dieter Findling and Stephan Sigg},
title={Workout Type Recognition and Repetition Counting with CNNs from 3D Acceleration Sensed on the Chest},
booktitle={International Work-Conference on Artificial Neural Networks},
year={2019},
doi = {10.1007/978-3-030-20521-8_29},
volume = {11506},
series = {LNCS},
pages = {347--359},
month = jun,
abstract = {Sports and workout activities have become important parts of modern life. Nowadays, many people track characteristics of their sport activities with their mobile devices, which feature inertial measurement unit (IMU) sensors. In this paper we present a methodology to detect and recognize workouts, as well as to count the repetitions done in a recognized type of workout, from a single 3D accelerometer worn at the chest. We consider four different types of workout (pushups, situps, squats, and jumping jacks). Our technical approach to workout type recognition and repetition counting is based on machine learning with a convolutional neural network. Our evaluation utilizes data from 10 subjects, who wear a Movesense sensor on their chest during their workout. We thereby find that workouts are recognized correctly on average 89.9\% of the time, and that workout repetition counting yields an average detection accuracy of 97.9\% over all types of workout.},
url_Paper = {http://ambientintelligence.aalto.fi/findling/pdfs/publications/Skawinski_19_WorkoutTypeRecognition.pdf},
group = {ambience}}
Sports and workout activities have become important parts of modern life. Nowadays, many people track characteristics of their sport activities with their mobile devices, which feature inertial measurement unit (IMU) sensors. In this paper we present a methodology to detect and recognize workouts, as well as to count the repetitions done in a recognized type of workout, from a single 3D accelerometer worn at the chest. We consider four different types of workout (pushups, situps, squats, and jumping jacks). Our technical approach to workout type recognition and repetition counting is based on machine learning with a convolutional neural network. Our evaluation utilizes data from 10 subjects, who wear a Movesense sensor on their chest during their workout. We thereby find that workouts are recognized correctly on average 89.9% of the time, and that workout repetition counting yields an average detection accuracy of 97.9% over all types of workout.
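The abstract's convolutional model could resemble the small 1D CNN over fixed-length 3-axis acceleration windows sketched below in PyTorch; the layer sizes, window length, and sampling rate are assumptions, since the paper's architecture is not reproduced in this listing.

import torch
import torch.nn as nn

class WorkoutCNN(nn.Module):
    """Small 1-D CNN over acceleration windows of shape (batch, 3, 200),
    e.g. 2 s at an assumed 100 Hz; layer sizes are illustrative."""

    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

logits = WorkoutCNN()(torch.randn(8, 3, 200))  # -> (8, 4) class scores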
Mobile Brainwaves: On the Interchangeability of Simple Authentication Tasks with Low-Cost, Single-Electrode EEG Devices.
Haukipuro, E.; Kolehmainen, V.; Myllarinen, J.; Remander, S.; Salo, J. T.; Takko, T.; Nguyen, L. N.; Sigg, S.; and Findling, R.
IEICE Transactions, Special issue on Sensing, Wireless Networking, Data Collection, Analysis and Processing Technologies for Ambient Intelligence with Internet of Things. 2019.
paper
doi
link
bibtex
abstract
@article{Haukipuro_2019_IEICE,
author={Eeva-Sofia Haukipuro and Ville Kolehmainen and Janne Myllarinen and Sebastian Remander and Janne T. Salo and Tuomas Takko and Le Ngu Nguyen and Stephan Sigg and Rainhard Findling},
journal={IEICE Transactions, Special issue on Sensing, Wireless Networking, Data Collection, Analysis and Processing Technologies for Ambient Intelligence with Internet of Things},
title={Mobile Brainwaves: On the Interchangeability of Simple Authentication Tasks with Low-Cost, Single-Electrode EEG Devices},
year={2019},
url_Paper = {http://ambientintelligence.aalto.fi/findling/pdfs/publications/Haukipuro_19_MobileBrainwavesInterchangeability.pdf},
abstract = {Electroencephalography (EEG) for biometric authentication has received some attention in recent years. In this paper, we explore the effect of three simple EEG-related authentication tasks, namely resting, thinking about a picture, and moving a single finger, on mobile, low-cost, single-electrode EEG authentication. We present details of our authentication pipeline, including extracting features from the frequency power spectrum and MFCCs, and training a multilayer perceptron classifier for authentication. For our evaluation we record an EEG dataset of 27 test subjects. We use a baseline, task-agnostic, and task-specific evaluation setup to investigate whether different tasks can be used in place of each other for authentication. We further evaluate whether the tasks themselves can be told apart from each other. Evaluation results suggest that the tasks differ, hence are to some extent distinguishable, and that our authentication approach can work in a task-specific as well as a task-agnostic manner.},
doi = {10.1587/transcom.2018SEP0016},
group = {ambience}
}
Electroencephalography (EEG) for biometric authentication has received some attention in recent years. In this paper, we explore the effect of three simple EEG-related authentication tasks, namely resting, thinking about a picture, and moving a single finger, on mobile, low-cost, single-electrode EEG authentication. We present details of our authentication pipeline, including extracting features from the frequency power spectrum and MFCCs, and training a multilayer perceptron classifier for authentication. For our evaluation we record an EEG dataset of 27 test subjects. We use a baseline, task-agnostic, and task-specific evaluation setup to investigate whether different tasks can be used in place of each other for authentication. We further evaluate whether the tasks themselves can be told apart from each other. Evaluation results suggest that the tasks differ, hence are to some extent distinguishable, and that our authentication approach can work in a task-specific as well as a task-agnostic manner.
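As an illustrative sketch of such a pipeline, the following Python snippet computes band-power features from a single-electrode EEG window and trains a multilayer perceptron on synthetic stand-in data. The band edges and network size are generic choices rather than the paper's, and the MFCC features the paper also uses are omitted for brevity.

import numpy as np
from sklearn.neural_network import MLPClassifier

def eeg_band_powers(signal, fs=256):
    """Power in the classic delta/theta/alpha/beta bands of one EEG window."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30)]
    return [power[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]

# Hypothetical training data: windows from the enrolled user vs. others.
rng = np.random.default_rng(0)
X = np.array([eeg_band_powers(rng.normal(size=512)) for _ in range(200)])
y = rng.integers(0, 2, size=200)  # 1 = genuine, 0 = impostor
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)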