Publication

Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices

dc.contributor.author: Ahmed, Hasmath Farhana
dc.contributor.author: Ahmad, Hafisoh
dc.contributor.author: Phang, Swee King
dc.contributor.author: Vaithilingam, Chockalingam
dc.contributor.author: Harkat, Houda
dc.contributor.author: Narasingamurthi, Kulasekharan
dc.date.accessioned: 2019-08-26T12:31:49Z
dc.date.available: 2019-08-26T12:31:49Z
dc.date.issued: 2019-07-04
dc.description.abstract: Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi devices has gained attention with recent advances in wireless technology. HGR recognizes the human activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction and transformation to pre-process the raw CSI traces. However, these methods fail to capture the non-Gaussian information in the raw CSI data because they are limited to linear signal representations alone. The proposed higher order statistics-based recognition (HOS-Re) model extracts higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the recognition task. HOS-Re addresses the limitations of the existing methods by extracting third-order cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods derived from information theory construct a robust and highly informative feature subset, which is fed as input to a multilevel support vector machine (SVM) classifier in order to measure the performance. The proposed methodology is validated using the public SignFi database, consisting of 276 gestures with 8280 gesture instances, of which 5520 are from the laboratory and 2760 from the home environment, using 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of 97.84%, 98.26% and 96.34% for the lab, home and lab + home environments, respectively. The average recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was 96.23% in the laboratory environment.
dc.description.sponsorship: Taylor's University through its TAYLOR'S PhD SCHOLARSHIP Programme
dc.description.version: info:eu-repo/semantics/publishedVersion
dc.identifier.doi: 10.3390/s19132959
dc.identifier.issn: 1424-8220
dc.identifier.uri: http://hdl.handle.net/10400.1/12737
dc.language.iso: eng
dc.publisher: MDPI
dc.subject: Gesture recognition
dc.subject: CSI
dc.subject: Wi-Fi
dc.subject: HOS
dc.subject: Cumulants
dc.subject: Mutual information
dc.subject: SVM
dc.title: Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices
dc.type: journal article
dspace.entity.type: Publication
oaire.citation.title: Sensors
oaire.citation.volume: 19
person.familyName: Phang
person.familyName: Harkat
person.familyName: Narasingamurthi
person.givenName: Swee King
person.givenName: Houda
person.givenName: Kulasekharan
person.identifier.orcid: 0000-0002-7877-8766
person.identifier.orcid: 0000-0002-7827-1527
person.identifier.orcid: 0000-0001-7919-7229
person.identifier.rid: J-3431-2018
person.identifier.scopus-author-id: 36519608700
person.identifier.scopus-author-id: 15058983200
rcaap.rights: openAccess
rcaap.type: article
relation.isAuthorOfPublication: 88ce0612-d283-4059-b755-da60813caf62
relation.isAuthorOfPublication: ff3a322c-945f-465a-b746-c69eab18be72
relation.isAuthorOfPublication: dc5264bc-372b-451e-bf63-e4069fe061e8
relation.isAuthorOfPublication.latestForDiscovery: dc5264bc-372b-451e-bf63-e4069fe061e8
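
The abstract above describes a three-stage pipeline: third-order cumulant feature extraction from raw CSI traces, information-theoretic feature selection, and SVM classification. The following is a minimal sketch of that kind of pipeline, not the authors' implementation: it assumes CSI amplitude traces as NumPy arrays, uses scikit-learn's mutual_info_classif with SelectKBest for the information-theoretic selection step, and substitutes a plain RBF-kernel SVC for the paper's multilevel SVM. All function names (third_order_cumulants, extract_features) and parameters (max_lag, k) are illustrative.

```python
# Hypothetical sketch of a HOS-Re-style pipeline; names and parameters
# are illustrative, not taken from the paper.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def third_order_cumulants(x, max_lag=4):
    """Estimate C3(t1, t2) = E[x(t) x(t+t1) x(t+t2)] for one trace.

    For a zero-mean signal, the third-order cumulant equals the
    third-order moment, so a sample mean of lagged products suffices.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                       # cumulants assume zero mean
    n = len(x)
    feats = []
    for t1 in range(max_lag + 1):
        for t2 in range(t1, max_lag + 1):  # C3 is symmetric in (t1, t2)
            m = n - t2                     # length of the valid overlap
            feats.append(np.mean(x[:m] * x[t1:t1 + m] * x[t2:t2 + m]))
    return np.array(feats)

def extract_features(csi_traces):
    """csi_traces: (n_samples, n_subcarriers, n_time) CSI amplitudes."""
    return np.array([
        np.concatenate([third_order_cumulants(sub) for sub in trace])
        for trace in csi_traces
    ])

# Synthetic data standing in for SignFi-style CSI traces and labels.
rng = np.random.default_rng(0)
X = extract_features(rng.standard_normal((40, 3, 200)))
y = rng.integers(0, 4, size=40)            # 4 dummy gesture classes

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),  # mutual-information selection
    SVC(kernel="rbf"),                       # stand-in for the multilevel SVM
)
clf.fit(X, y)
print(clf.score(X, y))
```

In this sketch each subcarrier trace yields (max_lag + 1)(max_lag + 2) / 2 cumulant values, which are concatenated across subcarriers before selection; the paper's actual lag grid, selection criterion, and multilevel SVM structure would replace these placeholders.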

Files

Original bundle
Name: Higher Order Feature Extraction and Selection for Robust Human Gesture Recognition using CSI of COTS Wi-Fi Devices .pdf
Size: 5.04 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 3.46 KB
Format: Item-specific license agreed to upon submission