Hey!

I'm Reza Amini Gougeh, a Ph.D. student at McGill University. Here I share my ideas, thoughts, and published papers, and discuss them in depth.

HCI fascinates me, especially when it intertwines with Machine Learning. My goal is not just to grow professionally, but also to help shape a future where our interactions with technology feel natural and empowering.

Check out my ResearchGate and Google Scholar!

My Work Experiences

This diagram presents a unique snapshot of my career, emphasizing the diverse skillsets I’ve cultivated and the experiences I’ve amassed over the years. Each flow represents an area of expertise or experience, beginning from the institutions where I gained these skills and leading to the specific areas of focus.

Virtual Reality (VR)

Virtual reality is a fascinating field that lies at the intersection of technology, design, and human perception. My work in VR has spanned various aspects, including creating immersive experiences, developing games, and exploring therapeutic applications. One of my primary interests in this field is understanding and optimizing the user experience in VR environments. This involves not only technical and design skills, but also a deep understanding of human psychology and sensory processing. Furthermore, I've worked extensively on multisensory integration in immersive experiences, exploring how we can leverage our understanding of human senses to create more engaging and realistic VR experiences. My involvement with the Cyberpsychology Lab of Université du Québec at Outaouais (UQO) and the East Azerbaijan National Elite Foundation exemplifies my work in this field.

Machine Learning (ML)

Machine Learning is a powerful tool that has revolutionized countless industries, from tech to healthcare, finance, and beyond. My expertise in ML ranges from traditional techniques such as regression, classification, and clustering to advanced areas like deep learning, neural networks, and dimensionality reduction. I have also delved into the realm of multi-modal ML and data fusion concepts, which involve integrating information from multiple sources or types of data. Additionally, I have experience with adversarial attacks and interpretable ML, which are crucial for building secure, robust, and transparent ML systems. My work at DREAM BIG Lab, Lady Davis Institute at JGH has allowed me to apply these skills in real-world projects.
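To give a flavor of how dimensionality reduction and classification fit together in practice, here is a minimal sketch of a scikit-learn pipeline. It uses the library's bundled digits dataset and PCA with logistic regression purely as illustrative stand-ins, not code from my actual projects:

```python
# Minimal sketch: dimensionality reduction + classification in one pipeline.
# Illustrative only -- the dataset and models are stand-ins, not my research code.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize, project the 64 pixel features down to 20 components, then classify.
clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=20),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)
print(f"Test accuracy: {clf.score(X_test, y_test):.3f}")
```

Wrapping the steps in a single pipeline keeps the projection fitted on training data only, which avoids leaking information from the test set into the dimensionality reduction.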

Data Science

Data science is the backbone of informed decision-making in businesses and organizations. My proficiency in data science encompasses visualization, statistical analysis, and data cleaning and preparation. Visualization and statistical analysis enable me to uncover insights from data and communicate them effectively. Meanwhile, proficiency in data cleaning and preparation ensures that the data feeding into ML models or analysis tools is accurate and relevant. My work at LDI is a testament to my expertise in Data Science.

Physiological Computing

Physiological computing involves using technology to interact with human biological systems. My work in this field has covered a variety of aspects, including biofeedback, wearable technology, human-computer interaction, health monitoring, and multi-modal signal processing. Biofeedback and health monitoring involve interpreting signals from the body to understand a person’s physiological state, while human-computer interaction and wearable technology focus on how humans can interact with technology in a seamless, intuitive way. Lastly, multi-modal signal processing, a crucial part of physiological computing, involves integrating and interpreting multiple types of physiological signals. My experience at the BCI Lab and MuSAE Lab reflects my commitment to this field.

Learn more about my startup journey: Diabeatify

At Diabeatify, we’re all about fostering community connections, offering a streamlined platform for easy diabetes management, and ensuring you’re always in the loop with the latest in diabetes care.

“Talk” with AI

Chit Chat Charm AI

Dive deep into the world of AI-assisted conversations like you've never seen before. At Chit Chat Charm, we offer not just text-based interactions but a full-fledged experience with visual and auditory immersion. Get ready to be charmed!

Webpage | Play Game | Kickstarter

Latest News

Chit Chat Charm ^__^

“Talk” with AI … Dive deep into the world of AI-assisted conversations like you’ve never seen before. At Chit Chat Charm, we offer not just text-based interactions but a full-fledged[…]

Read more

What is the Google Project Management Professional Certificate program?

The Google Project Management Professional Certificate program is designed to provide learners with a comprehensive understanding of project management methodologies and tools. It covers a range of topics, including project[…]

Read more

Attending the Scientist2Entrepreneur (S2E) Program! Finding my way into the world of startups!

During the last couple of months, I have been reading literature and gathering information on gaps that we have in the intersection of health, VR, and ML-AI. I came up[…]

Read more

Multisensory VR Experiences: My M.Sc. Journey

Our first paper: a review of systems that integrated VR with wearables for stroke rehabilitation!

Head-Mounted Display-Based Virtual Reality and Physiological Computing for Stroke Rehabilitation: A Systematic Review

Then we examined the impact of multisensory VR on QoE subscales (e.g., immersion, realism, engagement):

QoMEX 2022: Multisensory Immersive Experiences: A Pilot Study on Subjective and Instrumental Human Influential Factors Assessment

MetroXRaine: Quantifying User Behaviour in Multisensory Immersive Experiences

Then we proposed a multisensory VR training paradigm! However, its papers are under review. We investigated its QoE aspects in a paper submitted to the Quality and User Experience journal, and the effects on MI-BCI performance are reported in a Frontiers journal. I hope they get accepted and published as soon as possible!

Towards Instrumental Quality Assessment of Multisensory Immersive Experiences Using a Biosensor-Equipped Head-Mounted Display

Enhancing Motor Imagery Efficacy Using Multisensory Virtual Reality Training

Learn more about Oranges V2: An experiment with scents and force feedback!

Contributions

The organizations I have contributed to:

OpenLab at UHN

Traumas cote-nord

Lady Davis Institute at JGH

Cyberpsychology Lab of Université du Québec at Outaouais (UQO)

DREAM BIG Lab

Check out my Instagram, where I share my main hobby, photography!

Other Projects

An application of AI in Medicine

Online COVID-19 Diagnoser

The SARS-CoV-2 outbreak shocked healthcare systems around the world. It began in December 2019 in Wuhan, China, and spread to over 120 countries in less than three months. Imaging technologies enabled fast and reliable COVID-19 diagnosis, with CT scans and X-ray imaging being popular methods. This study focused on X-ray imaging, given the limited access to CT scanners in small cities and their cost. Deep learning models help clinicians diagnose precisely and quickly. We aimed to design an online system based on deep learning that reports lung involvement, patient status, and therapeutic guidelines. Our objective was to relieve pressure on radiologists and minimize the interval between imaging and diagnosis. VGG19, VGG16, InceptionV3, and ResNet50 were evaluated as candidates for the core of the online diagnosis system. VGG16 achieved the best score with 98.92% accuracy; VGG19, InceptionV3, and ResNet50 obtained 98.90%, 71.79%, and 28.27%, respectively.

About me…

I'm Reza Amini Gougeh, an interdisciplinary engineer with a B.Sc. in Biomedical Engineering from the University of Tabriz, Iran, and an M.Sc. in Telecommunications from INRS, where I worked in the MuSAE Lab in Montreal, QC.

My journey into programming began at the age of 16, when I delved into front-end design using HTML and CSS. My thirst for knowledge led me to explore C and C++, which further solidified my interest in the world of programming. In university, I faced challenging problems that required efficient and fast data analysis, drawing me towards Python and MATLAB.

In 2018, I delved into the fascinating world of Brain-Computer Interface (BCI) systems. This newfound passion for neuroscience and technology led to my involvement in a BCI Lab, where I extensively researched and developed in this field. Recognizing the potential impact of my work, Iran's National Elites Foundation invited me to contribute to a VR project aimed at improving 3D-object comprehension in elementary students. This venture allowed me to apply my technical skills in a practical, impactful way, contributing to the well-being of individuals living with dementia in a collaboration with the Speech Therapy department of Tabriz University of Medical Sciences.

Joining the MuSAE Lab was a pivotal moment in my career. The supportive, knowledgeable team around me created an environment that fostered productivity and learning. It was here that my interest in machine learning and artificial intelligence truly took flight. As a Machine Learning Engineer in the DREAM BIG research project since June 2021, I have been able to implement and further refine my skills in these areas, with a particular focus on mental health applications.

Today, my work centers around machine learning and data science. I've developed a profound understanding of dimensionality reduction, including feature selection, projection, and extraction. My expertise also extends to various aspects of machine learning, such as regression, classification, clustering, deep learning, and neural networks. I am particularly adept at interpreting machine learning models, ensuring their fairness and robustness.
In the realm of data science, I excel in visualization, statistical analysis, and data cleaning and preparation. In addition, I maintain a keen interest in physiological computing. I am exploring the intersections of biofeedback, wearable technology, human-computer interaction, health monitoring, and multi-modal signal processing. By synthesizing these diverse fields of study, I aim to push the boundaries of what’s possible in healthcare and improve outcomes on a global scale. I look forward to the new challenges and opportunities that lie ahead in my journey.