Asst. Janez Križaj, PhD
Post-doctoral researcher
Laboratory of Artificial Perception, Systems and Cybernetics (LUKS)
Faculty of Electrical Engineering, University of Ljubljana
Tržaška cesta 25
SI-1000 Ljubljana, Slovenia
Tel.: +386 1 4768 839
Fax.: +386 1 4768 316
E-mail: janez(dot)krizaj(at)fe(dot).uni-lj(dot)si
Education
BSc Degree (2009) – University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
PhD degree (2013) – University of Ljubljana, Faculty of Electrical Engineering, Ljubljana, Slovenia
Research interests
- computer vision and image processing
- pattern recognition
- biometrics
- 3D face recognition
Membership
- International Association for Pattern Recognition – IAPR
- The Slovene Pattern Recognition Society
- …
Biosketch
Janez received his B.Sc. degree in 2008 and his PhD degree in 2013, both from the Faculty of Electrical Engineering of the University of Ljubljana. In 2008 he became a Junior Researcher at the Laboratory of Artificial Perception, Systems and Cybernetics, where he focused on the development of 3D face recognition systems. From 2014 to mid-2015 he worked as a researcher on his post-doctoral project 3D-For-REAL (3D face recognition in real-world conditions). Currently, he is working on his second post-doctoral project, partially funded by the Slovenian Research Agency. Over the last few years he has collaborated on several national and international projects, among them two EU FP7 projects, RESPECT and SMART. Dr. Križaj is a member of the Slovenian Pattern Recognition Society and of the International Association for Pattern Recognition (IAPR). His research interests include biometrics, face recognition, pattern recognition and image processing. Janez has published papers in peer-reviewed journals and at leading international conferences, such as the International Joint Conference on Biometrics (IJCB) and the International Conference on Automatic Face and Gesture Recognition (AFGR). He also has two patent applications pending.
Recent publications
Janez Križaj; Simon Dobrišek; France Mihelič; Vitomir Štruc: Facial Landmark Localization from 3D Images. In: Proceedings of the Electrotechnical and Computer Science Conference (ERK), Portorož, Slovenia, 2016.
Abstract: A novel method for automatic facial landmark localization is presented. The method builds on the supervised descent framework, which was shown to successfully localize landmarks in the presence of large expression variations and mild occlusions, but struggles when localizing landmarks on faces with large pose variations. We propose an extension of the supervised descent framework which trains multiple descent maps and results in increased robustness to pose variations. The performance of the proposed method is demonstrated on the Bosphorus database for the problem of facial landmark localization from 3D data. Our experimental results show that the proposed method exhibits increased robustness to pose variations, while retaining high performance in the case of expression and occlusion variations.
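For readers curious about the mechanics, the sketch below illustrates (in plain NumPy) a supervised-descent-style landmark update run with several pose-specific descent maps, which is the core idea described in the abstract. The feature extractor, the descent maps R_k and offsets b_k, and the rule for choosing among pose hypotheses are all illustrative placeholders, not the authors' actual implementation.

    import numpy as np

    def extract_features(depth_image, landmarks):
        # Placeholder feature function: stack 16x16 depth patches around each landmark.
        h, w = depth_image.shape
        feats = []
        for (x, y) in landmarks.astype(int):
            x0 = int(np.clip(x - 8, 0, w - 16))
            y0 = int(np.clip(y - 8, 0, h - 16))
            feats.append(depth_image[y0:y0 + 16, x0:x0 + 16].ravel())
        return np.concatenate(feats)

    def localize(depth_image, init_landmarks, descent_map_sets, n_iters=4):
        # descent_map_sets: one list of (R_k, b_k) pairs per pose cluster (hypothetical, pre-trained).
        best, best_spread = None, np.inf
        for maps in descent_map_sets:
            x = init_landmarks.astype(float)
            for R, b in maps[:n_iters]:
                f = extract_features(depth_image, x)
                x = x + (R @ f + b).reshape(-1, 2)   # cascaded update x <- x + R*phi(x) + b
            # Placeholder selection rule among pose hypotheses; the paper's criterion may differ.
            spread = np.var(extract_features(depth_image, x))
            if spread < best_spread:
                best, best_spread = x, spread
        return best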
Vitomir Štruc; Janez Križaj; Simon Dobrišek: Modest Face Recognition. In: Proceedings of the International Workshop on Biometrics and Forensics (IWBF), pp. 1–6, IEEE, 2015.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/IWBF2015.pdf
Abstract: The facial imagery usually at the disposal of forensic investigations is commonly of poor quality due to the unconstrained settings in which it was acquired. The captured faces are typically non-frontal, partially occluded and of low resolution, which makes the recognition task extremely difficult. In this paper we try to address this problem by presenting a novel framework for face recognition that combines diverse feature sets (Gabor features, local binary patterns, local phase quantization features and pixel intensities), probabilistic linear discriminant analysis (PLDA) and data fusion based on linear logistic regression. With the proposed framework a matching score for the given pair of probe and target images is produced by applying PLDA to each of the four feature sets independently - producing a (partial) matching score for each of the PLDA-based feature vectors - and then combining the partial matching results at the score level to generate a single matching score for recognition. We make two main contributions in the paper: i) we introduce a novel framework for face recognition that relies on probabilistic MOdels of Diverse fEature SeTs (MODEST) to facilitate the recognition process and ii) benchmark it against the existing state-of-the-art. We demonstrate the feasibility of our MODEST framework on the FRGCv2 and PaSC databases and present comparative results with state-of-the-art recognition techniques, which demonstrate the efficacy of our framework.
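As an illustration of the score-level fusion step described above, the snippet below fits a linear logistic-regression fuser over per-feature-set matching scores, loosely following the described pipeline. The partial scores (one column per feature set) are assumed to come from separate PLDA matchers that are not shown here, and scikit-learn merely stands in for whatever fusion code was actually used.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fuse_scores(partial_scores_train, labels_train, partial_scores_test):
        # partial_scores_*: arrays of shape (n_pairs, n_feature_sets), e.g. one column
        # each for Gabor, LBP, LPQ and intensity matchers.
        # labels_train: 1 for genuine pairs, 0 for impostor pairs.
        fusion = LogisticRegression()
        fusion.fit(partial_scores_train, labels_train)
        # Fused matching score = posterior probability of a genuine pair.
        return fusion.predict_proba(partial_scores_test)[:, 1]

    # Example with random stand-in scores (4 feature sets):
    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 4)); y = rng.integers(0, 2, 200)
    test = rng.normal(size=(10, 4))
    print(fuse_scores(train, y, test))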
Ross Beveridge; Hao Zhang; Bruce A. Draper; Patrick J. Flynn; Zhenhua Feng; Patrik Huber; Josef Kittler; Zhiwu Huang; Shaoxin Li; Yan Li; Vitomir Štruc; Janez Križaj; et al.: Report on the FG 2015 Video Person Recognition Evaluation. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (IEEE FG), vol. 1, pp. 1–8, IEEE, 2015.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/fg2015videoEvalPreprint.pdf
Abstract: This report presents results from the Video Person Recognition Evaluation held in conjunction with the 11th IEEE International Conference on Automatic Face and Gesture Recognition. Two experiments required algorithms to recognize people in videos from the Point-and-Shoot Face Recognition Challenge Problem (PaSC). The first consisted of videos from a tripod-mounted high-quality video camera. The second contained videos acquired from 5 different handheld video cameras. There were 1401 videos in each experiment of 265 subjects. The subjects, the scenes, and the actions carried out by the people are the same in both experiments. Five groups from around the world participated in the evaluation. The video handheld experiment was included in the International Joint Conference on Biometrics (IJCB) 2014 Handheld Video Face and Person Recognition Competition. The top verification rate from this evaluation is double that of the top performer in the IJCB competition. Analysis shows that the factor most affecting algorithm performance is the combination of location and action: where the video was acquired and what the person was doing.
Simon Dobrišek; Vitomir Štruc; Janez Križaj; France Mihelič: Face Recognition in the Wild with the Probabilistic Gabor-Fisher Classifier. In: 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (IEEE FG): BWild 2015, vol. 2, pp. 1–6, IEEE, 2015.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/Bwild2015.pdf
Abstract: The paper addresses the problem of face recognition in the wild. It introduces a novel approach to unconstrained face recognition that exploits Gabor magnitude features and a simplified version of probabilistic linear discriminant analysis (PLDA). The novel approach, named Probabilistic Gabor-Fisher Classifier (PGFC), first extracts a vector of Gabor magnitude features from the given input image using a battery of Gabor filters, then reduces the dimensionality of the extracted feature vector by projecting it into a low-dimensional subspace, and finally produces a representation suitable for identity inference by applying PLDA to the projected feature vector. The proposed approach extends the popular Gabor-Fisher Classifier (GFC) to a probabilistic setting and thus improves on the generalization capabilities of the GFC method. The PGFC technique is assessed in face verification experiments on the Point and Shoot Face Recognition Challenge (PaSC) database, which features real-world videos of subjects performing everyday tasks. Experimental results on this challenging database show the feasibility of the proposed approach, which improves on the best results on this database reported in the literature at the time of writing.
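A rough sketch of the Gabor-magnitude feature extraction stage that the PGFC pipeline above builds on: a bank of Gabor filters, magnitude responses, downsampling and per-filter normalisation, before subspace projection and PLDA. The filter parameters and the use of OpenCV's getGaborKernel are assumptions made for illustration and do not reproduce the authors' exact setup.

    import cv2
    import numpy as np

    def gabor_magnitude_features(gray, scales=5, orientations=8, down=4):
        img = gray.astype(np.float32)
        feats = []
        for s in range(scales):
            lam = 4.0 * (2 ** (s * 0.5))                        # wavelength per scale (assumed values)
            for o in range(orientations):
                theta = o * np.pi / orientations
                re = cv2.getGaborKernel((31, 31), lam * 0.56, theta, lam, 1.0, psi=0)
                im = cv2.getGaborKernel((31, 31), lam * 0.56, theta, lam, 1.0, psi=np.pi / 2)
                r = cv2.filter2D(img, cv2.CV_32F, re)
                i = cv2.filter2D(img, cv2.CV_32F, im)
                mag = np.sqrt(r ** 2 + i ** 2)[::down, ::down]  # magnitude response + downsampling
                mag = (mag - mag.mean()) / (mag.std() + 1e-8)   # per-filter normalisation
                feats.append(mag.ravel())
        return np.concatenate(feats)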
Janez Križaj; Vitomir Štruc; France Mihelič: A Feasibility Study on the Use of Binary Keypoint Descriptors for 3D Face Recognition. In: Proceedings of the Mexican Conference on Pattern Recognition (MCPR), pp. 142–151, Springer, 2014.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/MCPR2014.pdf
Abstract: Despite the progress made in the area of local image descriptors in recent years, virtually no literature is available on the use of more recent descriptors for the problem of 3D face recognition, such as BRIEF, ORB, BRISK or FREAK, which are binary in nature and, therefore, tend to be faster to compute and match, while requiring significantly less memory for storage than, for example, SIFT or SURF. In this paper, we try to close this gap and present a feasibility study on the use of these descriptors for 3D face recognition. The descriptors are evaluated on three challenging 3D face image datasets, namely FRGC, UMB and CASIA. Our experiments show that the binary descriptors ensure slightly lower verification rates than SIFT, comparable to those of the SURF descriptor, while being an order of magnitude faster than SIFT. The results suggest that the use of binary descriptors represents a viable alternative to the established descriptors.
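To give a flavour of how such a comparison can be set up in practice, here is a generic OpenCV sketch that matches two images once with SIFT (floating-point descriptors, L2 distance) and once with a binary descriptor (ORB here, since FREAK and BRIEF live in the opencv-contrib package), using a Lowe-style ratio test. It illustrates the descriptor APIs only, not the evaluation protocol of the paper; depth images would first need to be converted to 8-bit grayscale.

    import cv2

    def match_count(img1, img2, binary=True, ratio=0.8):
        # img1, img2: 8-bit grayscale images (e.g. depth maps rescaled to 0-255).
        if binary:
            det = cv2.ORB_create(nfeatures=500)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # Hamming distance for binary codes
        else:
            det = cv2.SIFT_create(nfeatures=500)
            matcher = cv2.BFMatcher(cv2.NORM_L2)
        k1, d1 = det.detectAndCompute(img1, None)
        k2, d2 = det.detectAndCompute(img2, None)
        if d1 is None or d2 is None:
            return 0
        pairs = matcher.knnMatch(d1, d2, k=2)           # two nearest neighbours per descriptor
        good = [p[0] for p in pairs if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        return len(good)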
Janez Križaj; Vitomir Štruc; Simon Dobrišek; Darijan Marčetić; Slobodan Ribarić: SIFT vs. FREAK: Assessing the Usefulness of Two Keypoint Descriptors for 3D Face Verification. In: 37th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), pp. 1336–1341, Opatija, Croatia, 2014.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/MIPRO2014a.pdf
Abstract: Many techniques in the area of 3D face recognition rely on local descriptors to characterize the surface-shape information around points of interest (or keypoints) in 3D images. Despite the fact that a lot of advancements have been made in the area of keypoint descriptors over the last years, the literature on 3D face recognition for the most part still focuses on established descriptors, such as SIFT and SURF, and largely neglects more recent descriptors, such as the FREAK descriptor. In this paper we try to bridge this gap and assess the usefulness of the FREAK descriptor for the task of 3D face recognition. Of particular interest to us is a direct comparison of the FREAK and SIFT descriptors within a simple verification framework. To evaluate our framework with the two descriptors, we conduct 3D face recognition experiments on the challenging FRGCv2 and UMBDB databases and show that the FREAK descriptor ensures a very competitive verification performance when compared to the SIFT descriptor, but at a fraction of the computational cost. Our results indicate that the FREAK descriptor is a viable alternative to the SIFT descriptor for the problem of 3D face verification and, due to its binary nature, is particularly useful for real-time recognition systems and verification techniques for low-resource devices such as mobile phones and tablets.
Ross Beveridge; Hao Zhang; Patrick Flynn; Yooyoung Lee; Venice Erin Liong; Jiwen Lu; Marcus de Assis Angeloni; Tiago de Freitas Pereira; Haoxiang Li; Gang Hua; Vitomir Štruc; Janez Križaj; Jonathon Phillips: The IJCB 2014 PaSC Video Face and Person Recognition Competition. In: Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), pp. 1–8, IEEE, 2014.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/IJCB2014.pdf
Abstract: The Point-and-Shoot Face Recognition Challenge (PaSC) is a performance evaluation challenge including 1401 videos of 265 people acquired with handheld cameras and depicting people engaged in activities with non-frontal head pose. This report summarizes the results from a competition using this challenge problem. In the Video-to-video Experiment a person in a query video is recognized by comparing the query video to a set of target videos. Both target and query videos are drawn from the same pool of 1401 videos. In the Still-to-video Experiment the person in a query video is to be recognized by comparing the query video to a larger target set consisting of still images. Algorithm performance is characterized by the verification rate at a false accept rate of 0.01 and the associated receiver operating characteristic (ROC) curves. Participants were provided eye coordinates for video frames. Results were submitted by 4 institutions: (i) Advanced Digital Science Center, Singapore; (ii) CPqD, Brazil; (iii) Stevens Institute of Technology, USA; and (iv) University of Ljubljana, Slovenia. Most competitors demonstrated video face recognition performance superior to the baseline provided with PaSC. The results represent the best performance to date on the handheld video portion of the PaSC.
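For reference, the verification rate at a fixed false-accept rate (0.01 in the evaluation above) can be read off a set of genuine and impostor comparison scores roughly as in the helper below. This is a generic illustration, not the official PaSC evaluation code.

    import numpy as np

    def verification_rate_at_far(genuine, impostor, far=0.01):
        impostor = np.sort(np.asarray(impostor))
        # Threshold at which the fraction of impostor scores above it equals `far`.
        thr = impostor[int(np.ceil((1.0 - far) * len(impostor))) - 1]
        return float(np.mean(np.asarray(genuine) > thr)), float(thr)

    # Synthetic example with well-separated genuine/impostor score distributions:
    rng = np.random.default_rng(1)
    vr, thr = verification_rate_at_far(rng.normal(2.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000))
    print(f"VR@FAR=0.01: {vr:.3f} (threshold {thr:.2f})")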
Janez Križaj; Vitomir Štruc; Simon Dobrišek: Combining 3D Face Representations Using Region Covariance Descriptors and Statistical Models. In: Proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition and Workshops (IEEE FG), Workshop on 3D Face Biometrics, IEEE, Shanghai, China, 2013.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/FG2013.pdf
Abstract: The paper introduces a novel framework for 3D face recognition that capitalizes on region covariance descriptors and Gaussian mixture models. The framework presents an elegant and coherent way of combining multiple facial representations, while simultaneously examining all computed representations at various levels of locality. The framework first computes a number of region covariance matrices/descriptors from different sized regions of several image representations and then adopts the unscented transform to derive low-dimensional feature vectors from the computed descriptors. By doing so, it enables computations in the Euclidean space, and makes Gaussian mixture modeling feasible. In the last step a support vector machine classification scheme is used to make a decision regarding the identity of the modeled input 3D face image. The proposed framework exhibits several desirable characteristics, such as an inherent mechanism for data fusion/integration (through the region covariance matrices), the ability to examine the facial images at different levels of locality, and the ability to integrate domain-specific prior knowledge into the modeling procedure. We assess the feasibility of the proposed framework on the Face Recognition Grand Challenge version 2 (FRGCv2) database with highly encouraging results.
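The snippet below sketches what a single region covariance descriptor looks like in code: per-pixel feature maps are stacked and their covariance matrix is taken over a rectangular region. The specific feature maps used here (pixel coordinates, depth values and depth gradients) and the region handling are only an example and may differ from the paper's choices.

    import numpy as np

    def region_covariance(depth, x0, y0, w, h):
        region = depth[y0:y0 + h, x0:x0 + w].astype(np.float64)
        gy, gx = np.gradient(region)                 # depth gradients
        ys, xs = np.mgrid[0:h, 0:w]                  # pixel coordinates within the region
        # Each row: one pixel; each column: one feature map.
        F = np.stack([xs.ravel(), ys.ravel(), region.ravel(),
                      gx.ravel(), gy.ravel()], axis=1)
        return np.cov(F, rowvar=False)               # 5x5 region covariance matrix

    depth = np.random.default_rng(2).normal(size=(100, 100))
    print(region_covariance(depth, 20, 30, 16, 16).shape)   # (5, 5)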
Vildana Sulič Kenk; Janez Križaj; Vitomir Štruc; Simon Dobrišek: Smart Surveillance Technologies in Border Control. In: European Journal of Law and Technology, vol. 4, no. 2, 2013.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/Kenk.pdf
Abstract: The paper addresses the technical and legal aspects of the existing and forthcoming intelligent ('smart') surveillance technologies that are (or are considered to be) employed in the border control application area. Such technologies provide computerized decision-making support to border control authorities and are intended to increase the reliability and efficiency of border control measures. However, the question that arises is how effective these technologies are, as well as at what price, economically, socially, and in terms of citizens' rights. The paper provides a brief overview of smart surveillance technologies in border control applications, especially those used for controlling cross-border traffic, discusses possible proportionality issues and privacy risks raised by the increasingly widespread use of such technologies, as well as good/best practices developed in this area. In a broader context, the paper presents the results of the research carried out as part of the SMART (Scalable Measures for Automated Recognition Technologies) project.
Janez Križaj; Vitomir Štruc; Simon Dobrišek: Robust 3D Face Recognition. In: Electrotechnical Review, vol. 79, no. 1-2, pp. 1–6, 2012.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/KrizajEV.pdf
Abstract: Face recognition in uncontrolled environments is hindered by variations in illumination, pose, expression and occlusions of faces. Many practical face-recognition systems are affected by these variations. One way to increase the robustness to illumination and pose variations is to use 3D facial images. In this paper 3D face-recognition systems are presented. Their structure and operation are described. The robustness of such systems to variations in uncontrolled environments is emphasized. We present some preliminary results of a system developed in our laboratory.
Janez Križaj; Vitomir Štruc; Simon Dobrišek: Towards Robust 3D Face Verification Using Gaussian Mixture Models. In: International Journal of Advanced Robotic Systems, vol. 9, InTech, 2012. DOI: 10.5772/52200.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/IntechJanez-1.pdf
Abstract: This paper focuses on the use of Gaussian mixture models (GMM) for 3D face verification. A special interest is taken in practical aspects of 3D face verification systems, where all steps of the verification procedure need to be automated and no meta-data, such as pre-annotated eye/nose/mouth positions, is available to the system. In such settings the performance of the verification system correlates heavily with the performance of the employed alignment (i.e., geometric normalization) procedure. We show that popular holistic as well as local recognition techniques, such as principal component analysis (PCA) or scale-invariant feature transform (SIFT)-based methods, considerably deteriorate in their performance when an “imperfect” geometric normalization procedure is used to align the 3D face scans and that in these situations GMMs should be preferred. Moreover, several possibilities to improve the performance and robustness of the classical GMM framework are presented and evaluated: i) explicit inclusion of spatial information during the GMM construction procedure, ii) implicit inclusion of spatial information during the GMM construction procedure and iii) on-line evaluation and possible rejection of local feature vectors based on their likelihood. We successfully demonstrate the feasibility of the proposed modifications on the Face Recognition Grand Challenge data set.
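In the spirit of the GMM framework above, a minimal verification sketch looks roughly like this: a Gaussian mixture is fitted to the local feature vectors extracted from a client's enrolment scans, and a probe scan is scored by its log-likelihood ratio against a background (world) model. Feature extraction is abstracted away, scikit-learn's GaussianMixture stands in for the authors' own GMM machinery, and the proposed spatial extensions are not included.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def enroll(client_features, n_components=16):
        # client_features: array of shape (n_local_vectors, feature_dim) from enrolment scans.
        gmm = GaussianMixture(n_components=n_components, covariance_type='diag',
                              random_state=0)
        gmm.fit(client_features)
        return gmm

    def verify(probe_features, client_gmm, world_gmm):
        # Average log-likelihood ratio between the client model and a background model.
        return client_gmm.score(probe_features) - world_gmm.score(probe_features)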
Janez Križaj; Vitomir Štruc; Nikola Pavešić: Adaptation of SIFT Features for Robust Face Recognition. In: Proceedings of the 7th International Conference on Image Analysis and Recognition (ICIAR 2010), pp. 394–404, Póvoa de Varzim, Portugal, 2010.
PDF: http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/FSIFT.pdf
Abstract: The Scale Invariant Feature Transform (SIFT) is an algorithm used to detect and describe scale-, translation- and rotation-invariant local features in images. The original SIFT algorithm has been successfully applied in general object detection and recognition tasks, panorama stitching and others. One of its more recent uses also includes face recognition, where it was shown to deliver encouraging results. SIFT-based face recognition techniques found in the literature rely heavily on the so-called keypoint detector, which locates interest points in the given image that are ultimately used to compute the SIFT descriptors. While these descriptors are known to be, among other things, (partially) invariant to illumination changes, the keypoint detector is not. Since varying illumination is one of the main issues affecting the performance of face recognition systems, the keypoint detector represents the main source of errors in face recognition systems relying on SIFT features. To overcome this shortcoming of SIFT-based methods, we present in this paper a novel face recognition technique that computes the SIFT descriptors at predefined (fixed) locations learned during the training stage. By doing so, it eliminates the need for keypoint detection on the test images and renders our approach more robust to illumination changes than related approaches from the literature. Experiments, performed on the Extended Yale B face database, show that the proposed technique compares favorably with several popular techniques from the literature in terms of performance.
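The central trick of the paper, computing SIFT descriptors at fixed image locations instead of detected keypoints, maps directly onto OpenCV's API, as the sketch below shows. The regular grid and the patch size are only stand-ins; in the paper the locations are learned during training.

    import cv2
    import numpy as np

    def sift_at_fixed_locations(gray, locations, size=12):
        # locations: iterable of (x, y) pixel coordinates; size: keypoint scale in pixels.
        sift = cv2.SIFT_create()
        kps = [cv2.KeyPoint(float(x), float(y), size) for (x, y) in locations]
        kps, desc = sift.compute(gray, kps)      # descriptors only, no keypoint detection
        return desc                              # shape: (n_locations, 128)

    img = np.zeros((128, 128), dtype=np.uint8)   # stand-in for an aligned face image
    grid = [(x, y) for x in range(16, 128, 16) for y in range(16, 128, 16)]
    print(sift_at_fixed_locations(img, grid).shape)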