Research Interests
Computer Vision; Knowledge Representation and Machine Learning; Robotics and Automation

Dr Feras Dayoub
Senior Lecturer
School of Computer Science and Information Technology
College of Engineering and Information Technology
I work at the intersection of computer vision, machine learning, and robotics, specializing in Embodied AI and Robotic Vision as part of the Australian Institute for Machine Learning (AIML) at the University of Adelaide. I co-direct CROSSING, a French-Australian laboratory focused on human-autonomous agent teaming. In addition to my role at AIML, I hold an Adjunct position at Queensland University of Technology (QUT) and serve as an Associate Investigator with the QUT Centre for Robotics. Previously, I was a Chief Investigator at the ARC Centre of Excellence for Robotic Vision. My research is dedicated to advancing the reliable deployment of computer vision and machine learning on mobile robots in real-world environments. I have worked on applied robotic vision projects spanning agricultural innovation, environmental conservation, and autonomous infrastructure monitoring. As an educator, I am passionate about teaching programming, computer vision, machine learning, and robotic perception.

Appointments

| Date | Position | Institution name |
|---|---|---|
| 2026 - ongoing | Associate Professor | Adelaide University |
| 2022 - 2025 | Senior Lecturer | University of Adelaide |
| 2022 - 2025 | Adjunct Senior Lecturer | Queensland University of Technology |
| 2019 - 2022 | Senior Lecturer | Queensland University of Technology |
| 2016 - 2019 | Centre Research Fellow | ARC Centre of Excellence for Robotic Vision (ACRV) |
| 2012 - 2016 | Postdoctoral Research Fellow | Queensland University of Technology |

Journal Articles

| Year | Citation |
|---|---|
| 2023 | Pershouse, D., Dayoub, F., Miller, D., & Sünderhauf, N. (2023). Addressing the Challenges of Open-World Object Detection. |
| 2023 | Shi, X., Qiao, Y., Wu, Q., Liu, L., & Dayoub, F. (2023). Improving Online Source-free Domain Adaptation for Object Detection by Unsupervised Data Acquisition. |
| 2023 | Chapman, N. H., Dayoub, F., Browne, W., & Lehnert, C. (2023). Predicting Class Distribution Shift for Reliable Domain Adaptive Object Detection. IEEE Robotics and Automation Letters, 8(8), 1-8. Scopus6 WoS4 |
| 2022 | Hall, D., Talbot, B., Bista, S. R., Zhang, H., Smith, R., Dayoub, F., & Sünderhauf, N. (2022). BenchBot environments for active robotics (BEAR): Simulated data for active scene understanding research. International Journal of Robotics Research, 41(3), 259-269. Scopus3 WoS3 |
| 2022 | Rahman, Q. M., Sunderhauf, N., Corke, P., & Dayoub, F. (2022). FSNet: A Failure Detection Framework for Semantic Segmentation. IEEE Robotics and Automation Letters, 7(2), 1-8. Scopus16 WoS14 |
| 2022 | Miller, D., Sunderhauf, N., Milford, M., & Dayoub, F. (2022). Uncertainty for identifying open-set errors in visual object detection. IEEE Robotics and Automation Letters, 7(1), 215-222. Scopus36 WoS30 |
| 2021 | Talbot, B., Dayoub, F., Corke, P., & Wyeth, G. (2021). Robot navigation in unseen spaces using an abstract map. IEEE Transactions on Cognitive and Developmental Systems, 13(4), 791-805. Scopus17 WoS13 |
| 2021 | Rahman, Q. M., Corke, P., & Dayoub, F. (2021). Run-time monitoring of machine learning for robotic perception: a survey of emerging trends. IEEE Access, 9, 20067-20075. Scopus53 WoS46 |
| 2020 | Haviland, J., Dayoub, F., & Corke, P. (2020). Control of the Final-Phase of Closed-Loop Visual Grasping using Image-Based Visual Servoing. |
| 2020 | Arain, B., Dayoub, F., Rigby, P., & Dunbabin, M. (2020). Close-Proximity Underwater Terrain Mapping Using Learning-based Coarse Range Estimation. |
| 2019 | Skinner, J., Hall, D., Zhang, H., Dayoub, F., & Sünderhauf, N. (2019). The Probabilistic Object Detection Challenge. |
| 2019 | Sünderhauf, N., Dayoub, F., Hall, D., Skinner, J., Zhang, H., Carneiro, G., & Corke, P. (2019). A probabilistic challenge for object detection. Nature Machine Intelligence, 1(9), 443. WoS3 |
| 2018 | Ahn, H. S., Sa, I., & Dayoub, F. (2018). Introduction to the Special Issue on Precision Agricultural Robotics and Autonomous Farming Technologies. IEEE Robotics and Automation Letters, 3(4), 4435-4438. Scopus5 WoS3 |
| 2018 | Hall, D., Dayoub, F., Perez, T., & McCool, C. (2018). A rapidly deployable classification system using visual data for the application of precision weed management. Computers and Electronics in Agriculture, 148, 107-120. Scopus22 WoS16 Europe PMC2 |
| 2017 | Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., Dayoub, F., . . . Perez, T. (2017). Robot for weed species plant-specific management. Journal of Field Robotics, 34(6), 1179-1199. Scopus167 WoS139 |
| 2017 | Sa, I., Lehnert, C., English, A., McCool, C., Dayoub, F., Upcroft, B., & Perez, T. (2017). Peduncle Detection of Sweet Pepper for Autonomous Crop Harvesting-Combined Color and 3-D Information. IEEE Robotics and Automation Letters, 2(2), 765-772. Scopus107 WoS88 |
| 2016 | Sa, I., Ge, Z., Dayoub, F., Upcroft, B., Perez, T., & McCool, C. (2016). Deepfruits: A fruit detection system using deep neural networks. Sensors (Switzerland), 16(8), 1-23. Scopus996 WoS726 Europe PMC207 |
| 2015 | Dayoub, F., Morris, T., & Corke, P. (2015). Rubbing shoulders with mobile service robots. IEEE Access, 3, 333-342. Scopus8 WoS8 |
| 2011 | Dayoub, F., Cielniak, G., & Duckett, T. (2011). Long-term experiments with an adaptive spherical view representation for navigation in changing environments. Robotics and Autonomous Systems, 59(5), 285-295. Scopus50 WoS42 |
| 2024 | Clement, B., Dubromel, M., Santos, P. E., Sammut, K., Oppert, M., & Dayoub, F. (2024). Hybrid Navigation Acceptability and Safety. Proceedings of the AAAI Symposium Series, 2(1), 11-17. |

Books

| Year | Citation |
|---|---|
| 2020 | Garg, S., Sünderhauf, N., Dayoub, F., Morrison, D., Cosgun, A., Carneiro, G., . . . Milford, M. (2020). Semantics for Robotic Mapping, Perception and Interaction: A Survey (Vol. 8). United States: Now Publishers. DOI |

Book Chapters

| Year | Citation |
|---|---|
| 2025 | Shi, X., Qiao, Y., Wu, Q., Liu, L., & Dayoub, F. (2025). Improving Online Source-Free Domain Adaptation for Object Detection by Unsupervised Data Acquisition. In A. DelBue, C. Canton, J. Pont-Tuset, & T. Tommasi (Eds.), Lecture Notes in Computer Science (Vol. 15629 LNCS, pp. 195-205). SPRINGER INTERNATIONAL PUBLISHING AG. DOI Scopus1 WoS1 |
| 2017 | Perez, T., Bawden, O., Kulk, J., Russell, R., McCool, C., English, A., & Dayoub, F. (2017). Overview of mechatronic design for a weed-management robotic system. In D. Zhang, & B. Wei (Eds.), Robotics and Mechatronics for Agriculture (1st ed., pp. 23-49). Boca Raton, USA: CRC Press. DOI |
| 2015 | Dayoub, F., Cielniak, G., & Duckett, T. (2015). Eight weeks of episodic visual navigation inside a non-stationary environment using adaptive spherical views. In L. Mejias, P. Corke, & J. Roberts (Eds.), Springer Tracts in Advanced Robotics (Vol. 105, pp. 379-392). SPRINGER-VERLAG BERLIN. DOI Scopus2 WoS2 |
| 2011 | Dayoub, F., Cielniak, G., & Duckett, T. (2011). Long-term experiment using an adaptive appearance-based map for visual navigation by mobile robots. In Lecture Notes in Computer Science (Vol. 6856 LNAI, pp. 400-401). Springer Berlin Heidelberg. DOI Scopus1 |

Conference Papers

| Year | Citation |
|---|---|
| 2025 | Podgorski, S., Garg, S., Hosseinzadeh, M., Mares, L., Dayoub, F., & Reid, I. (2025). TANGO: Traversability-Aware Navigation with Local Metric Control for Topological Goals. In 2025 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2399-2406). Atlanta, GA, USA: IEEE. DOI |
| 2025 | Deng, J., He, T., Jiang, L., Wang, T., Dayoub, F., & Reid, I. (2025). 3D-LLaVA: Towards Generalist 3D LMMs with Omni Superpoint Transformer. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 3772-3782). Nashville, TN: IEEE Computer Society. DOI |
| 2025 | Lin, C. -J., Garg, S., Chin, T. -J., & Dayoub, F. (2025). Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms. In 2025 IEEE International Conference on Robotics and Automation (ICRA) (pp. 8337-8343). Atlanta, GA, USA: IEEE. DOI |
| 2025 | Zhang, W., Li, Y., Qiao, Y., Huang, S., Liu, J., Dayoub, F., . . . Liu, L. (2025). Effective Tuning Strategies for Generalist Robot Manipulation Policies. In 2025 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7255-7262). Atlanta, GA, USA: IEEE. DOI |
| 2025 | Abraham, S. S., Garg, S., & Dayoub, F. (2025). To Ask or Not to Ask? Detecting Absence of Information in Vision and Language Navigation. In Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025 (pp. 7480-7489). Tucson, AZ, USA: IEEE. DOI |
| 2025 | Chapman, N. H., Lehnert, C., Browne, W., & Dayoub, F. (2025). Enhancing Embodied Object Detection with Spatial Feature Memory. In Proceedings - 2025 IEEE Winter Conference on Applications of Computer Vision, WACV 2025 (pp. 6921-6931). Tucson, AZ, USA: IEEE. DOI Scopus1 |
| 2024 | McLeod, S., Chng, C. K., Ono, T., Shimizu, Y., Hemmi, R., Holden, L., . . . Chin, T. J. (2024). Robust Perspective-n-Crater for Crater-based Camera Pose Estimation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (pp. 6760-6769). Seattle: IEEE. DOI Scopus3 WoS2 |
| 2024 | Yuan, D., Maire, F., & Dayoub, F. (2024). Temporal Attention for Cross-View Sequential Image Localization. In IEEE International Conference on Intelligent Robots and Systems (pp. 7429-7436). Abu Dhabi, United Arab Emirates: IEEE. DOI |
| 2024 | Abou-Chakra, J., Rana, K., Dayoub, F., & Sünderhauf, N. (2024). Physically Embodied Gaussian Splatting: A Visually Learnt and Physically Grounded 3D Representation for Robotics. In Proceedings of Machine Learning Research Vol. 270 (pp. 513-530). Munich, Germany: ML Research Press. Scopus1 |
| 2024 | Wilson, S., Fischer, T., Dayoub, F., Miller, D., & Sünderhauf, N. (2024). SAFE: Sensitivity-Aware Features for Out-of-Distribution Object Detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV 2023) (pp. 23508-23519). online: IEEE. DOI Scopus26 WoS18 |
| 2024 | Wu, R., Wang, H., Dayoub, F., & Chen, H. T. (2024). Segment beyond View: Handling Partially Missing Modality for Audio-Visual Semantic Segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence Vol. 38 (pp. 6100-6108). Online: Association for the Advancement of Artificial Intelligence (AAAI). DOI Scopus6 WoS2 |
| 2024 | Yuan, D., Maire, F., & Dayoub, F. (2024). Cross-Attention between Satellite and Ground Views for Enhanced Fine-Grained Robot Geo-Localization. In Proceedings - 2024 IEEE Winter Conference on Applications of Computer Vision, WACV 2024 (pp. 1238-1245). Online: IEEE. DOI Scopus5 |
| 2024 | Abou-Chakra, J., Dayoub, F., & Sunderhauf, N. (2024). ParticleNeRF: A Particle-Based Encoding for Online Neural Radiance Fields. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2024) (pp. 5963-5972). Online: IEEE. DOI Scopus10 WoS1 |
| 2024 | Garg, S., Rana, K., Hosseinzadeh, M., Mares, L., Sünderhauf, N., Dayoub, F., & Reid, I. (2024). RoboHop: Segment-based Topological Map Representation for Open-World Visual Navigation. In Proceedings - IEEE International Conference on Robotics and Automation Vol. 35 (pp. 4090-4097). Yokohama, Japan: IEEE. DOI Scopus9 WoS4 |
| 2023 | Wilson, S., Fischer, T., Sunderhauf, N., & Dayoub, F. (2023). Hyperdimensional Feature Fusion for Out-of-Distribution Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV 2023) (pp. 2643-2653). Online: IEEE. DOI Scopus13 WoS12 |
| 2022 | Corke, P., Dayoub, F., Hall, D., Skinner, J., & Sünderhauf, N. (2022). What Can Robotics Research Learn from Computer Vision Research? In Proceedings of the 19th International Symposium on Robotics Research (ISRR 2019), as published in Springer Proceedings in Advanced Robotics Vol. 20 (pp. 987-1003). Cham, Switzerland: Springer. DOI |
| 2021 | Moskvyak, O., Maire, F., Dayoub, F., Armstrong, A. O., & Baktashmotlagh, M. (2021). Robust Re-identification of Manta Rays from Natural Markings by Learning Pose Invariant Embeddings. In DICTA 2021 - 2021 International Conference on Digital Image Computing: Techniques and Applications (pp. 1-8). online: IEEE. DOI Scopus26 WoS14 |
| 2021 | Bista, S. R., Hall, D., Talbot, B., Zhang, H., Dayoub, F., & Sünderhauf, N. (2021). Evaluating the Impact of Semantic Segmentation and Pose Estimation on Dense Semantic SLAM. In 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 5328-5335). online: IEEE. DOI Scopus9 WoS7 |
| 2021 | Miller, D., Sunderhauf, N., Milford, M., & Dayoub, F. (2021). Class anchor clustering: A loss for distance-based open set recognition. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision (WACV 2021) (pp. 3569-3577). online: IEEE. DOI Scopus138 WoS102 |
| 2021 | Moskvyak, O., Maire, F., Dayoub, F., & Baktashmotlagh, M. (2021). Keypoint-aligned embeddings for image retrieval and re-identification. In Proceedings - 2021 IEEE Winter Conference on Applications of Computer Vision, WACV 2021 (pp. 676-685). online: IEEE. DOI Scopus24 WoS20 |
| 2021 | Zhang, H., Wang, Y., Dayoub, F., & Sünderhauf, N. (2021). VarifocalNet: An IoU-aware Dense Object Detector. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 8510-8519). online: IEEE. DOI Scopus940 WoS779 |
| 2021 | Rahman, Q. M., Sunderhauf, N., & Dayoub, F. (2021). Per-frame mAP Prediction for Continuous Performance Monitoring of Object Detection during Deployment. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW 2021) (pp. 152-160). online: IEEE. DOI Scopus16 WoS16 |
| 2021 | Rahman, Q. M., Sünderhauf, N., & Dayoub, F. (2021). Online Monitoring of Object Detection Performance During Deployment. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021) (pp. 4839-4845). online: IEEE. DOI Scopus11 WoS10 |
| 2021 | Moskvyak, O., Maire, F., Dayoub, F., & Baktashmotlagh, M. (2021). Semi-Supervised Keypoint Localization. In ICLR 2021 - 9th International Conference on Learning Representations (pp. 1-11). Virtual only: Open Review. Scopus15 |
| 2020 | Moskvyak, O., Maire, F., Dayoub, F., & Baktashmotlagh, M. (2020). Learning Landmark Guided Embeddings for Animal Re-identification. In 2020 IEEE Winter Applications of Computer Vision Workshops (WACVW) (pp. 12-19). online: IEEE. DOI Scopus10 WoS11 |
| 2020 | Hall, D., Dayoub, F., Skinner, J., Zhang, H., Miller, D., Corke, P., . . . Sunderhauf, N. (2020). Probabilistic object detection: Definition and evaluation. In Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020 (pp. 1020-1029). online: IEEE. DOI Scopus91 WoS69 |
| 2019 | Halodová, L., Dvořáková, E., Majer, F., Vintr, T., Mozos, O. M., Dayoub, F., & Krajník, T. (2019). Predictive and adaptive maps for long-term visual navigation in changing environments. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 7033-7039). Macau, China: IEEE. DOI Scopus22 WoS18 |
| 2019 | Miller, D., Dayoub, F., Milford, M., & Sunderhauf, N. (2019). Evaluating merging strategies for sampling-based uncertainty techniques in object detection. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2019) (pp. 2348-2354). online: IEEE. DOI Scopus98 WoS75 |
| 2019 | Rahman, Q. M., Sunderhauf, N., & Dayoub, F. (2019). Did You Miss the Sign? A False Negative Alarm System for Traffic Sign Detectors. In 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3748-3753). online: IEEE. DOI Scopus23 WoS23 |
| 2019 | Miller, D., Sünderhauf, N., Zhang, H., Hall, D., & Dayoub, F. (2019). Benchmarking sampling-based probabilistic object detectors. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops Vol. 2019-June (pp. 42-45). Scopus12 |
| 2018 | Abbas, A., Maire, F., Shirazi, S., Dayoub, F., & Eich, M. (2018). A dynamic planner for object assembly tasks based on learning the spatial relationships of its parts from a single demonstration. In T. Mitrovic, B. Xue, & X. Li (Eds.), Proceedings of AI 2018: Advanced in Artificial Intelligence 31st Australasian Joint Conference Vol. 11320 LNAI (pp. 759-765). Wellington, New Zealand: Springer International Publishing. DOI Scopus2 |
| 2018 | McFadyen, A., Dayoub, F., Martin, S., Ford, J., & Corke, P. (2018). Assisted Control for Semi-Autonomous Power Infrastructure Inspection Using Aerial Vehicles. In IEEE International Conference on Intelligent Robots and Systems (pp. 5719-5726). online: IEEE. DOI Scopus3 WoS2 |
| 2018 | Miller, D., Nicholson, L., Dayoub, F., & Sunderhauf, N. (2018). Dropout Sampling for Robust Object Detection in Open-Set Conditions. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2018) (pp. 3243-3249). Brisbane, Australia: IEEE. DOI Scopus218 WoS179 |
| 2018 | Abbas, A., Maire, F., Dayoub, F., & Shirazi, S. (2018). Combining learning from demonstration and search algorithm for dynamic goal-directed assembly task planning. In ACRA 2018 Proceedings Vol. 2018-December. Online: Australian Robotics and Automation Association. |
| 2017 | Dayoub, F., Sunderhauf, N., & Corke, P. I. (2017). Episode-Based Active Learning with Bayesian Neural Networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2017) Vol. 2017-July (pp. 498-500). Honolulu, Hawaii, USA: IEEE. DOI Scopus7 WoS5 |
| 2017 | Hall, D., Dayoub, F., Kulk, J., & McCool, C. (2017). Towards unsupervised weed scouting for agricultural robotics. In Proceedings - IEEE International Conference on Robotics and Automation (pp. 5223-5230). online: IEEE. DOI Scopus36 |
| 2017 | Hall, D., Dayoub, F., Perez, T., & McCool, C. (2017). A transplantable system for weed classification by agricultural robotics. In 2017 IEEE International Conference on Intelligent Robots and Systems Vol. 2017-September (pp. 5174-5179). Vancouver, BC, Canada: IEEE. DOI Scopus6 WoS4 |
| 2016 | Sunderhauf, N., Dayoub, F., McMahon, S., Talbot, B., Schulz, R., Corke, P., . . . Milford, M. (2016). Place categorization and semantic mapping on a mobile robot. In Proceedings - IEEE International Conference on Robotics and Automation Vol. 2016-June (pp. 5729-5736). online: IEEE. DOI Scopus129 WoS96 |
| 2016 | McCool, C., Sa, I., Dayoub, F., Lehnert, C., Perez, T., & Upcroft, B. (2016). Visual detection of occluded crop: For automated harvesting. In Proceedings - IEEE International Conference on Robotics and Automation Vol. 2016-June (pp. 2506-2512). Stockholm, Sweden: IEEE. DOI Scopus68 WoS54 |
| 2016 | Talbot, B., Lam, O., Schulz, R., Dayoub, F., Upcroft, B., & Wyeth, G. (2016). Find my office: Navigating real space from semantic descriptions. In Proceedings - IEEE International Conference on Robotics and Automation Vol. 2016-June (pp. 5782-5787). online: IEEE. DOI Scopus19 WoS12 |
| 2015 | Schulz, R., Talbot, B., Lam, O., Dayoub, F., Corke, P., Upcroft, B., & Wyeth, G. (2015). Robot navigation using human cues: A robot navigation system for symbolic goal-directed exploration. In Proceedings IEEE International Conference on Robotics and Automation Vol. 2015-June (pp. 1100-1105). Seattle, WA: IEEE COMPUTER SOC. DOI Scopus28 WoS16 |
| 2015 | Dayoub, F., Dunbabin, M., & Corke, P. (2015). Robotic detection and tracking of Crown-of-Thorns starfish. In IEEE International Conference on Intelligent Robots and Systems Vol. 2015-December (pp. 1921-1928). Hamburg, GERMANY: IEEE. DOI Scopus34 WoS23 |
| 2015 | Sünderhauf, N., Shirazi, S., Jacobson, A., Dayoub, F., Pepperell, E., Upcroft, B., & Milford, M. (2015). Place recognition with convnet landmarks: Viewpoint-robust, condition-robust, training-free. In Proceedings of the Robotics: Science and Systems XI Conference (RSS 2015) Vol. 11 (pp. 1-10). online: Robotics: Science and Systems Foundation. DOI Scopus330 WoS198 |
| 2015 | Hall, D., McCool, C., Dayoub, F., Sünderhauf, N., & Upcroft, B. (2015). Evaluation of features for leaf classification in challenging conditions. In Proceedings of the 2015 IEEE Winter Conference on Applications of Computer Vision (WACV 2015) (pp. 797-804). Waikoloa, HI: IEEE. DOI Scopus114 WoS79 |
| 2015 | Sünderhauf, N., Shirazi, S., Dayoub, F., Upcroft, B., & Milford, M. (2015). On the performance of ConvNet features for place recognition. In IEEE International Conference on Intelligent Robots and Systems Vol. 2015-December (pp. 4297-4304). Hamburg, GERMANY: IEEE. DOI Scopus476 WoS387 |
| 2015 | Lam, O., Dayoub, F., Schulz, R., & Corke, P. (2015). Automated topometric graph generation from floor plan analysis. In Australasian Conference on Robotics and Automation (ACRA). Scopus9 |
| 2014 | Lam, O., Dayoub, F., Schulz, R., & Corke, P. (2014). Text recognition approaches for indoor robotics: A comparison. In Australasian Conference on Robotics and Automation (ACRA) Vol. 02-04-December-2014. Scopus4 |
| 2014 | Morris, T., Dayoub, F., Corke, P., & Upcroft, B. (2014). Simultaneous localization and planning on multiple map hypotheses. In IEEE International Conference on Intelligent Robots and Systems (pp. 4531-4536). Chicago, IL: IEEE. DOI Scopus5 WoS5 |
| 2014 | Morris, T., Dayoub, F., Corke, P., Wyeth, G., & Upcroft, B. (2014). Multiple map hypotheses for planning and navigating in non-stationary environments. In Proceedings IEEE International Conference on Robotics and Automation (pp. 2765-2770). Hong Kong, China: IEEE. DOI Scopus19 WoS15 |
| 2013 | Dayoub, F., Morris, T., Upcroft, B., & Corke, P. (2013). Vision-only autonomous navigation using topometric maps. In N. Amato (Ed.), IEEE International Conference on Intelligent Robots and Systems (pp. 1923-1929). Tokyo, Japan: IEEE. DOI Scopus35 WoS23 |
| 2013 | Dayoub, F., Morris, T., Upcroft, B., & Corke, P. (2013). One Robot, eight hours, and twenty four thousand people. In Australasian Conference on Robotics and Automation Acra. Scopus1 |
| 2010 | Dayoub, F., Duckett, T., & Cielniak, G. (2010). Short- and long-term adaptation of visual place memories for mobile robots. In Proceedings of the International Symposium on Remembering Who We Are: Human Memory for Artificial Agents, a Symposium at the AISB 2010 Convention (pp. 21-26). Scopus2 |
| 2008 | Dayoub, F., & Duckett, T. (2008). An adaptive appearance-based map for long-term topological localization of mobile robots. In R. Chatila, A. Kelly, & J. P. Merlet (Eds.), 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3364-3369). Nice, France: IEEE. DOI Scopus79 WoS55 |

Preprints

| Year | Citation |
|---|---|
| 2024 | Mallick, P., Dayoub, F., & Sherrah, J. (2024). Wasserstein Distance-based Expansion of Low-Density Latent Regions for Unknown Class Detection. |
| 2024 | Yuan, D., Maire, F., & Dayoub, F. (2024). Temporal Attention for Cross-View Sequential Image Localization. |
| 2024 | Lin, C. -J., Garg, S., Chin, T. -J., & Dayoub, F. (2024). Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms. |
| 2024 | Bockman, J., Howe, M., Orenstein, A., & Dayoub, F. (2024). AARK: An Open Toolkit for Autonomous Racing Research. |
| 2024 | Garg, S., Rana, K., Hosseinzadeh, M., Mares, L., Sünderhauf, N., Dayoub, F., & Reid, I. (2024). RoboHop: Segment-based Topological Map Representation for Open-World Visual Navigation. |
| 2024 | Abraham, S. S., Garg, S., & Dayoub, F. (2024). To Ask or Not to Ask? Detecting Absence of Information in Vision and Language Navigation. |
| 2024 | Abou-Chakra, J., Rana, K., Dayoub, F., & Sünderhauf, N. (2024). Physically Embodied Gaussian Splatting: A Realtime Correctable World Model for Robotics. |
| 2023 | Holden, L., Dayoub, F., Harvey, D., & Chin, T. -J. (2023). Federated Neural Radiance Fields. |
| 2023 | Wu, R., Wang, H., Dayoub, F., & Chen, H. -T. (2023). Segment Beyond View: Handling Partially Missing Modality for Audio-Visual Semantic Segmentation. |
| 2022 | Wilson, S., Fischer, T., Dayoub, F., Miller, D., & Sünderhauf, N. (2022). SAFE: Sensitivity-Aware Features for Out-of-Distribution Object Detection. |

Current Higher Degree by Research Supervision

| Date | Role | Research Topic | Program | Degree Type | Student Load | Student Name |
|---|---|---|---|---|---|---|
| 2025 | Co-Supervisor | Exploring Human-Robot Interaction as a Collaborative Model for One-to-Many Teaming in Aerospace Environments | Doctor of Philosophy | Doctorate | Full Time | Mrs Donna Lee Duffy |
| 2025 | Co-Supervisor | Low-Cost Implementation of Visual Pseudo-Tactile and Multitasking Imitation Learning | Master of Philosophy | Master | Full Time | Mr Yukun Chen |
| 2024 | Principal Supervisor | Foundation models for goal-oriented reinforcement learning and intrinsic exploration | Master of Philosophy | Master | Full Time | Mr Dustin Wyly Craggs |
| 2024 | Principal Supervisor | Robust Machine Learning Techniques for RF Signal Classification on Sparse and Noisy Digitally Sampled Radar Data | Master of Philosophy | Master | Full Time | Mr Sebastian Luke McCormack Cocks |
| 2024 | Co-Supervisor | Hand-held Object Identification, Segmentation, and Tracking in the Wild | Doctor of Philosophy | Doctorate | Full Time | Mr Huy Anh Nguyen |
| 2024 | Principal Supervisor | Data-driven physically plausible dexterous manipulation | Doctor of Philosophy | Doctorate | Full Time | Mr King Hang Wong |
| 2023 | Principal Supervisor | 3D Scene Understanding and Change Tracking | Doctor of Philosophy | Doctorate | Full Time | Mr Chun-Jung Lin |
| 2023 | Co-Supervisor | 3D indoor Scene Reconstruction | Doctor of Philosophy | Doctorate | Full Time | Mr Wenbo Zhang |
| 2022 | Co-Supervisor | Enhancing Mars rovers using AI-enabled robotic vision | Doctor of Philosophy | Doctorate | Full Time | Mr Lachlan William Holden |

Completed Higher Degree by Research Supervision

| Date | Role | Research Topic | Program | Degree Type | Student Load | Student Name |
|---|---|---|---|---|---|---|
| 2023 - 2025 | Principal Supervisor | Domain Adaptation Object Detection for Mobile Robots | Master of Philosophy | Master | Full Time | Mr Xiangyu Shi |
| 2022 - 2024 | Co-Supervisor | Towards Pedestrian Safety Augmented Reality System | Master of Philosophy | Master | Full Time | Mr Renjie Wu |