
*A herd of ANYmal-C robots bounding in Isaac Lab.*

Hi! I'm William Notaro 👋

Robotics & Embodied AI

Visiting Student Researcher at KTH Royal Institute of Technology | Honors Student at Scuola Superiore Meridionale

Email LinkedIn


🚀 About Me

I'm a Master's student in Automation Engineering and Robotics bridging the gap between Generative AI and Safety.

  • 🎯 Currently working on: Imitation Learning for Bimanual Mobile Manipulation at KTH.
  • 🔭 Research focus: Foundation Models for Robotics, Sim-to-Real Transfer, Formal Verification.

🔬 Featured Research & Projects

🤖 A VLM-based Control Framework with Plan Verification - (repo)

MSc Thesis Project | Supervisor: Prof. A. Finzi

The Challenge: Enabling robots to handle open-world instructions ("Pick up the object that looks like...") while ensuring safety.

  • Method: Integrated Vision-Language Models for semantic reasoning with PDDL (Planning Domain Definition Language) for formal plan verification.
  • Result: Deployed on a real Franka Research 3. Achieved <4 mm reprojection error in grounding.
  • Tech: ROS 2, Python, PDDL, VLM (Gemini Robotics-ER 1.5).
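The verification idea above can be sketched in a few lines: the VLM proposes a symbolic plan, and a checker simulates it against PDDL-style action models, rejecting any plan whose preconditions fail. This is a minimal illustration only; the action names, predicates, and effects below are hypothetical, not taken from the thesis code.

```python
# Hypothetical PDDL-style action models: name -> (preconditions, add list, delete list).
ACTIONS = {
    "pick":  ({"clear(obj)", "handempty"}, {"holding(obj)"}, {"handempty", "clear(obj)"}),
    "place": ({"holding(obj)"}, {"handempty", "clear(obj)"}, {"holding(obj)"}),
}

def verify_plan(plan, state):
    """Symbolically simulate `plan` from `state`; return (ok, failing_step)."""
    state = set(state)
    for i, name in enumerate(plan):
        pre, add, delete = ACTIONS[name]
        if not pre <= state:          # a precondition is unmet: reject the plan
            return False, i
        state = (state - delete) | add  # apply the action's effects
    return True, None

ok, step = verify_plan(["pick", "place"], {"clear(obj)", "handempty"})
print(ok, step)  # True None
```

A plan that tries to `place` without first holding anything is rejected at step 0, which is exactly the safety gate the framework inserts between VLM reasoning and execution.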

> [!TIP]
> Check the Full Video Demo: YouTube Video


๐Ÿ• Dynamic Gait Learning via Reinforcement Learning - (repo)

Advanced Robotics Project | 2025

The Challenge: Training a quadruped robot to execute dynamic gaits in simulation.

  • Method: Utilized Isaac Lab (NVIDIA) and Reinforcement Learning (PPO) to train a locomotion policy.
  • Result: Achieved stable bounding gait at 3 m/s on ANYmal-C model with 42 min estimated autonomy.
  • Tech: NVIDIA Isaac Lab, PyTorch, RL.
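A typical way such PPO locomotion policies are shaped is with a velocity-tracking reward minus an effort penalty. The sketch below is a generic illustration in that spirit; the target speed matches the 3 m/s bounding gait above, but the functional form and coefficients are assumptions, not the project's actual reward.

```python
import numpy as np

def locomotion_reward(base_lin_vel, joint_torques=None, target_vel=3.0):
    """Gaussian reward for tracking the commanded forward speed,
    minus a small quadratic penalty on joint torques (actuation effort)."""
    tracking = np.exp(-((base_lin_vel - target_vel) ** 2) / 0.25)  # peaks at 1.0 on target
    effort = 1e-4 * np.sum(np.square(joint_torques)) if joint_torques is not None else 0.0
    return tracking - effort
```

At the commanded speed with zero torque the reward is exactly 1.0 and falls off sharply away from it, which is what pushes PPO toward a fast, efficient gait.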



🦾 Imitation Learning for Bimanual Mobile Manipulation

Visiting Researcher @ KTH | Present

The Challenge: Coordinating the Unitree G1 Humanoid for Deformable Object Manipulation.

  • Method: Investigating Hybrid Action Spaces in Imitation Learning to solve long-horizon manipulation tasks.
  • Context: Working under the supervision of Prof. Danica Kragic.
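To make "hybrid action space" concrete: it pairs a discrete decision (e.g. which manipulation mode to enter) with continuous commands (e.g. end-effector deltas for each arm). The toy structure below is purely illustrative; the field names and mode set are hypothetical, not from the KTH project.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HybridAction:
    """One step of a hybrid action: discrete mode + continuous arm commands."""
    mode: int                   # discrete head: 0 = move, 1 = grasp, 2 = release
    left_ee_delta: np.ndarray   # continuous head: 6-DoF twist for the left arm
    right_ee_delta: np.ndarray  # continuous head: 6-DoF twist for the right arm

action = HybridAction(mode=1, left_ee_delta=np.zeros(6), right_ee_delta=np.zeros(6))
```

An imitation-learning policy over such actions typically uses a classification loss on the discrete head and a regression (or diffusion) loss on the continuous heads.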


๐Ÿ› ๏ธ Technical Stack

| Area | Tools & Frameworks |
| --- | --- |
| Robotics Middleware | ROS2, Gazebo |
| AI & Learning | PyTorch, TensorFlow, Isaac Lab |
| Languages | C++, Python, MATLAB |
| DevOps & Tools | Linux, Docker |

Last update: Feb 2026 | Based in Stockholm, Sweden 🇸🇪

Pinned repositories

  1. vlm-pddl-manipulation (C++)

    Neuro-Symbolic VLM Agent for Zero-Shot Robotic Manipulation on Franka Emika (FR3). Integrates VLM (Gemini Robotics-ER 1.5) reasoning with PDDL verification for safe, long-horizon planning in ROS2.

  2. biologically-inspired-anymal (Python)

    Inspired by the bounding mechanics of a biological counterpart (Kumba), this project engineers a dynamic gait for ANYmal-C using NVIDIA Isaac Lab, optimized for limited-compute hardware.

  3. htn-pick-place-blocksworld (C++)

    This project aims to solve the classic "Blocksworld" problem within a realistic simulated environment, bridging the gap between high-level symbolic planning and low-level motor execution. The syste…