Dr Hemanth Saratchandran
Senior Research Fellow
School of Computer Science and Information Technology
College of Engineering and Information Technology
I am a mathematician primarily working in the areas of Artificial Intelligence/Deep Learning, Computer Vision, Optimization, Geometry and Topology.
| Date | Position | Institution name |
|---|---|---|
| 2022 - ongoing | Research Fellow | Australian Institute for Machine Learning |
| 2020 - 2022 | Laureate Research Associate | University of Adelaide |
| 2019 - 2020 | Scientific Assistant (Wissenschaftlicher Mitarbeiter, TV-L 13, full time) | University of Augsburg |
| 2016 - 2018 | Postdoctoral Research Fellow | Beijing International Center for Mathematical Research |
| Language | Competency |
|---|---|
| English | Can read, write, speak, understand spoken English, and peer review |
| Date | Institution name | Country | Title |
|---|---|---|---|
| 2011 - 2016 | University of Oxford | United Kingdom | DPhil |
| 2006 - 2010 | Australian National University | Australia | Bachelor of Advanced Science (with Honours) |
| Year | Citation |
|---|---|
| 2025 | Gordon, C., MacDonald, L. E., Saratchandran, H., & Lucey, S. (2025). D’OH: Decoder-Only Random Hypernetworks for Implicit Neural Representations. Lecture Notes in Computer Science, 15478 LNCS, 128-147. |
| 2025 | Albert, P., Zhang, F. Z., Saratchandran, H., Hengel, A. V. D., & Abbasnejad, E. (2025). Towards Higher Effective Rank in Parameter-efficient Fine-tuning using Khatri-Rao Product. CoRR, abs/2508.00230. |
| 2025 | Yao, J., Mitchell, L., Maclean, J., & Saratchandran, H. (2025). Data Denoising and Derivative Estimation for Data-Driven Modeling of Nonlinear Dynamical Systems. CoRR, abs/2509.14219. |
| 2025 | Garg, A., Saratchandran, H., Garg, R., & Lucey, S. (2025). Stable Forgetting: Bounded Parameter-Efficient Unlearning in LLMs. CoRR, abs/2509.24166. |
| 2024 | Saratchandran, H., Ramasinghe, S., Shevchenko, V., Long, A., & Lucey, S. (2024). A Sampling Theory Perspective on Activations for Implicit Neural Representations. Proceedings of Machine Learning Research, 235, 43422-43444. |
| 2024 | Saratchandran, H., Ch'ng, S. -F., & Lucey, S. (2024). Architectural Strategies for the optimization of Physics-Informed Neural Networks. CoRR, abs/2402.02711. |
| 2023 | Bandara, L., Goffeng, M., & Saratchandran, H. (2023). Realisations of elliptic operators on compact manifolds with boundary. Advances in Mathematics, 420, 108968. |
| 2023 | Hochs, P., & Saratchandran, H. (2023). A Ruelle dynamical zeta function for equivariant flows. |
| 2023 | Ramasinghe, S., Saratchandran, H., Shevchenko, V., & Lucey, S. (2023). On the effectiveness of neural priors in modeling dynamical systems. CoRR, abs/2303.05728. |
| 2022 | Milatovic, O., & Saratchandran, H. (2022). Generalized Ornstein–Uhlenbeck semigroups in weighted $L^p$-spaces on Riemannian manifolds. Journal of Functional Analysis, 283(8), 62 pages. |
| 2022 | Saratchandran, H., Zhang, J., & Zhang, P. (2022). A New Higher Order Yang-Mills-Higgs Flow on Riemannian 4-Manifolds. Bulletin of the Australian Mathematical Society, 107(2), 320-329. |
| 2022 | Hochs, P., & Saratchandran, H. (2022). Equivariant analytic torsion for proper actions. |
| 2022 | Ramasinghe, S., MacDonald, L. E., Farazi, M. R., Saratchandran, H., & Lucey, S. (2022). How You Start Matters for Generalization. CoRR, abs/2206.08558. |
| 2021 | MacDonald, L., Mathai, V., & Saratchandran, H. (2021). On the Chern character in Higher Twisted K-theory and spherical T-duality. Communications in Mathematical Physics, 385(1), 331-368. |
| 2021 | Saratchandran, H. (2021). Essential self-adjointness of perturbed quadharmonic operators on Riemannian manifolds with an application to the separation problem. Mathematische Nachrichten, 294(5), 997-1044. |
| 2021 | Milatovic, O., & Saratchandran, H. (2021). Essential Self-Adjointness of Perturbed Biharmonic Operators via Conformally Transformed Metrics. Potential Analysis, 56(4), 623-647. |
| 2019 | Saratchandran, H. (2019). Higher order Seiberg–Witten functionals and their associated gradient flows. Manuscripta Mathematica, 160(3-4), 411-481. |
| 2019 | Milatovic, O., & Saratchandran, H. (2019). Inequalities and separation for covariant Schrödinger operators. Journal of Geometry and Physics, 138, 215-222. |
| 2018 | Saratchandran, H. (2018). Finite volume hyperbolic complements of 2-tori and Klein bottles in closed smooth simply connected 4-manifolds. New York Journal of Mathematics, 24, 443-450. |
| 2017 | Bandara, L., & Saratchandran, H. (2017). Essential self-adjointness of powers of first-order differential operators on non-compact manifolds with low-regularity metrics. Journal of Functional Analysis, 273(12), 3719-3758. |
| 2016 | Saratchandran, H. (2016). Kirby diagrams and the Ratcliffe-Tschantz hyperbolic 4-manifolds. Topology and its Applications, 202, 301-317. |
| 2015 | Saratchandran, H. (2015). A four dimensional hyperbolic link complement in a standard $S^4$. |
| 2015 | Saratchandran, H. (2015). A four dimensional hyperbolic link complement in a standard $S^2 \times S^2$. |
| 2015 | Saratchandran, H. (2015). Complements of tori in $\#_{2k}S^2 \times S^2$ that admit a hyperbolic structure. |
| Year | Citation |
|---|---|
| 2025 | Ch'ng, S. -F., Saratchandran, H., & Lucey, S. (2025). Preconditioners for the Stochastic Training of Neural Fields. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 27222-27232). Nashville, TN, USA: Computer Vision Foundation / IEEE. |
| 2025 | Ji, Y., Saratchandran, H., Moghaddam, P., & Lucey, S. (2025). Always Skip Attention. |
| 2025 | Ji, Y., Saratchandran, H., Gordon, C., Zhang, Z., & Lucey, S. (2025). Efficient Learning with Sine-Activated Low-Rank Matrices. In 13th International Conference on Learning Representations (ICLR 2025) (pp. 27194-27215). OpenReview.net. |
| 2025 | Albert, P., Zhang, F. Z., Saratchandran, H., Opazo, C. R., Hengel, A. V. D., & Abbasnejad, E. (2025). RandLoRA: Full-rank parameter-efficient fine-tuning of large models. In ICLR. OpenReview.net. |
| 2024 | Saratchandran, H., Ramasinghe, S., Shevchenko, V., Long, A., & Lucey, S. (2024). A sampling theory perspective on activations for implicit neural representations. In ICML (23 pages). Vienna, Austria: OpenReview.net. |
| 2024 | Saratchandran, H., Chng, S. -F., Ramasinghe, S., MacDonald, L. E., & Lucey, S. (2024). Curvature-Aware Training for Coordinate Networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV 2023) (pp. 13282-13292). Online: IEEE. |
| 2024 | Saratchandran, H., Ramasinghe, S., & Lucey, S. (2024). From Activation to Initialization: Scaling Insights for Optimizing Neural Fields. In CVPR (pp. 413-422). Seattle, WA, USA: IEEE. |
| 2024 | Ch'ng, S. -F., Garg, R., Saratchandran, H., & Lucey, S. (2024). Invertible Neural Warp for NeRF. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, & G. Varol (Eds.), ECCV (17) Vol. 15075 (pp. 405-421). Springer. |
| 2024 | Saratchandran, H., Wang, T. X., & Lucey, S. (2024). Weight Conditioning for Smooth Optimization of Neural Networks. In A. Leonardis, E. Ricci, S. Roth, O. Russakovsky, T. Sattler, & G. Varol (Eds.), ECCV (85) Vol. 15143 (pp. 310-325). Springer. |
| 2024 | Gordon, C., MacDonald, L. E., Saratchandran, H., & Lucey, S. (2024). D'OH: Decoder-Only Random Hypernetworks for Implicit Neural Representations. In M. Cho, I. Laptev, D. Tran, A. Yao, & H. Zha (Eds.), ACCV (7) Vol. 15478 (pp. 128-147). Springer. |
| 2023 | MacDonald, L. E., Valmadre, J., Saratchandran, H., & Lucey, S. (2023). On skip connections and normalisation layers in deep optimisation. In A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, & S. Levine (Eds.), NeurIPS Vol. 36 (20 pages). Online: Neural Information Processing Systems Foundation. |
| 2023 | Ramasinghe, S., MacDonald, L., Farazi, M., Saratchandran, H., & Lucey, S. (2023). How Much does Initialization Affect Generalization? In ICML'23: Proceedings of the 40th International Conference on Machine Learning Vol. 202 (pp. 28637-28655). Honolulu, Hawaii, USA: Association for Computing Machinery (ACM). |
| Year | Citation |
|---|---|
| 2022 | MacDonald, L. E., Saratchandran, H., Valmadre, J., & Lucey, S. (2022). A global analysis of global optimisation. |
| Year | Citation |
|---|---|
| 2025 | Albert, P., Zhang, F. Z., Saratchandran, H., Rodriguez-Opazo, C., Hengel, A. V. D., & Abbasnejad, E. (2025). RandLoRA: Full-rank parameter-efficient fine-tuning of large models. |
| 2025 | Hochs, P., & Saratchandran, H. (2025). An equivariant Guillemin trace formula. |
| 2025 | Saratchandran, H., & Lucey, S. (2025). Enhancing Transformers Through Conditioned Embedded Tokens. |
| 2025 | Zheng, J., Li, X., Saratchandran, H., & Lucey, S. (2025). Structured Initialization for Vision Transformers. |
| 2025 | Saratchandran, H., Teney, D., & Lucey, S. (2025). Leaner Transformers: More Heads, Less Depth. |
| 2025 | Shinnick, Z., Jiang, L., Saratchandran, H., Hengel, A. V. D., & Teney, D. (2025). Transformers Pretrained on Procedural Data Contain Modular Structures for Algorithmic Reasoning. |
| 2024 | Saratchandran, H., Zheng, J., Ji, Y., Zhang, W., & Lucey, S. (2024). Rethinking Attention: Polynomial Alternatives to Softmax in Transformers. |
| 2024 | Chng, S. -F., Garg, R., Saratchandran, H., & Lucey, S. (2024). Invertible Neural Warp for NeRF. |
| 2024 | Saratchandran, H., Wang, T. X., & Lucey, S. (2024). Weight Conditioning for Smooth Optimization of Neural Networks. |
| 2024 | Gordon, C., MacDonald, L. E., Saratchandran, H., & Lucey, S. (2024). D'OH: Decoder-Only Random Hypernetworks for Implicit Neural Representations. |
| 2024 | Saratchandran, H., Chng, S. -F., & Lucey, S. (2024). Architectural Strategies for the optimization of Physics-Informed Neural Networks. |
| 2024 | Saratchandran, H., Chng, S. -F., & Lucey, S. (2024). Analyzing the Neural Tangent Kernel of Periodically Activated Coordinate Networks. |
| 2024 | Saratchandran, H., Ramasinghe, S., Shevchenko, V., Long, A., & Lucey, S. (2024). A Sampling Theory Perspective on Activations for Implicit Neural Representations. |
| 2023 | Saratchandran, H., Ch'ng, S. -F., Ramasinghe, S., MacDonald, L. E., & Lucey, S. (2023). Curvature-Aware Training for Coordinate Networks. |
| Date | Role | Research Topic | Program | Degree Type | Student Load | Student Name |
|---|---|---|---|---|---|---|
| 2025 | Co-Supervisor | 3D applications of Artificial Intelligence | Master of Philosophy | Master | Full Time | Mr Irhas Muhammad Gill |
| 2025 | Co-Supervisor | Secrets of Implicit Neural Representation | Doctor of Philosophy | Doctorate | Full Time | Mr Yiping Ji |
| 2025 | Co-Supervisor | Advancing Signal Modelling with Physics-Informed Neural Networks. | Doctor of Philosophy | Doctorate | Full Time | Mr Ajeendra Panicker |
| 2025 | Co-Supervisor | Implicit Neural Representations-Neural Prior and Beyond | Doctor of Philosophy | Doctorate | Full Time | Mr Mingze Ma |
| 2024 | Co-Supervisor | Compressed Representation of Signals using Quantized Implicit Neural Representations | Doctor of Philosophy | Doctorate | Full Time | Mr Cameron Gordon |
| 2024 | Co-Supervisor | Unsupervised Deep Geometry | Doctor of Philosophy | Doctorate | Full Time | Ms Xueqian Li |