Speaker 1: Imagine this for a second. You’re driving, maybe trying to change the radio station, but you really need to keep your eyes on the road. Instead, your hand just hovers in the air and you actually feel a shape — maybe a circle — that tells you what you’re doing as you make a gesture.

Speaker 2: Or picture a student who’s visually impaired trying to learn geometry. Their tutor, who could be miles away, can literally draw a triangle right there on their palm, guiding their finger along invisible edges.

Speaker 1: That sounds pretty futuristic, doesn’t it?

Speaker 2: It really does, but it’s not science fiction. Today we’re diving into the fascinating world of mid-air haptics — technology that lets you feel digital things without touching anything physical.

Speaker 1: Our mission is to unpack the science: how do we make these invisible sensations clear and intuitive? We’re focusing on how people perceive geometric shapes in midair.

Speaker 2: We’re drawing on a research paper called Mid-Air Haptic Rendering of 2D Geometric Shapes with a Dynamic Tactile Pointer. This isn’t just cool tech — it’s about solving real-world problems, like safer, more accessible car controls.

Speaker 1: Or revolutionising how visually impaired students learn. Understanding human perception could unlock new ways to interact with technology.


What Is Mid-Air Haptics?

Speaker 1: Break it down for us.

Speaker 2: It’s a technology that creates tactile sensations directly on bare skin in midair — no gloves or attachments. It works using focused ultrasound waves, creating precise points of pressure on your skin, especially palms and fingertips. By rapidly modulating these points, it mimics the feeling of touch, stimulating the same mechanoreceptors as physical contact.

Speaker 1: So, like a mini force field?

Speaker 2: Not quite, but close. And it’s not just lab-bound — the company Ultrahaptics, now Ultraleap, began commercialising it in 2013. It’s used in art, multimedia, VR, and car interfaces. But unlike physical touch, mid-air haptics lacks natural exploratory cues like pushing, squeezing, or tracing edges, which creates ambiguity in identifying shapes.
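The focusing Speaker 2 describes relies on phased-array focusing: every transducer is driven with a phase offset that compensates for its distance to the target, so all waves arrive in phase and reinforce at one point. A minimal numeric sketch — the array geometry, 40 kHz frequency, and function names here are illustrative, not taken from the paper:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C
FREQ = 40_000.0         # Hz, a typical frequency for ultrasonic haptic arrays

def phase_delays(transducers, focal_point):
    """Per-transducer phase offsets so the emitted waves arrive in phase
    at the focal point, creating a localised pressure spot on the skin."""
    wavelength = SPEED_OF_SOUND / FREQ  # ~8.6 mm at 40 kHz
    delays = []
    for pos in transducers:
        dist = math.dist(pos, focal_point)
        # phase needed to compensate for this transducer's path length
        phase = (2 * math.pi * dist / wavelength) % (2 * math.pi)
        delays.append(phase)
    return delays

# Toy 2x2 array on a plane, focusing 15 cm above its centre
array = [(x, y, 0.0) for x in (-0.05, 0.05) for y in (-0.05, 0.05)]
phases = phase_delays(array, (0.0, 0.0, 0.15))
```

Rapidly moving or modulating that focal point is what produces the sensations discussed below.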


The Research Problem

Speaker 1: The question was: how accurately can people identify 2D shapes in midair, and how confident do they feel doing so?

Speaker 2: Researchers tested two approaches:

  • Static — the full shape outline presented at once (like a stamp).
  • Dynamic — a single tactile point traces the shape’s perimeter.

They hypothesised:

  1. Dynamic shapes would be recognised more accurately.
  2. Dynamic shapes would inspire more confidence.

Experiment One: Continuous Shapes

Setup:

  • 34 participants, ages 18–50.
  • Shapes: circle, square, equilateral triangle (6 cm).
  • Dynamic tracing speed: 2 seconds per loop (anticlockwise).
  • Two touch conditions: passive (hand still) and active (hand can move).
  • Task: identify the shape or say “I don’t know,” then rate confidence 1–7.
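The dynamic condition above can be sketched as a path generator: a single tactile point sweeps the shape’s perimeter anticlockwise at constant speed, completing one loop every 2 seconds. A rough sketch for the square, assuming an arbitrary 200 Hz update rate (the function name and sample rate are illustrative, not from the paper):

```python
def trace_square(side=0.06, loop_time=2.0, rate=200):
    """Sample positions of one tactile point tracing a square perimeter
    anticlockwise at constant speed, one full loop per loop_time seconds."""
    corners = [(0.0, 0.0), (side, 0.0), (side, side), (0.0, side)]
    perimeter = 4 * side
    n = int(loop_time * rate)  # samples per loop
    points = []
    for i in range(n):
        d = (i / n) * perimeter           # distance travelled along perimeter
        edge = int(d // side)             # which edge we are on (0..3)
        t = (d % side) / side             # fractional progress along that edge
        (x0, y0), (x1, y1) = corners[edge], corners[(edge + 1) % 4]
        points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return points

path = trace_square()
```

In the static condition, by contrast, all perimeter points would be rendered at once rather than visited in sequence.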

Findings:

  • Passive touch: Dynamic accuracy (57%) slightly beat static (51%). Confidence was higher for dynamic (median 5 vs 3).
  • Active touch: No significant accuracy difference (dynamic 53%, static 57%), possibly due to movement conflicts. Confidence still higher for dynamic.

Key Issue: Persistent confusion between circles and squares — up to 38% misidentifications.


Experiment Two: Multi-Stroke with Pauses

To improve clarity, researchers introduced a third hypothesis (H3): breaking shapes into discrete strokes with pauses at the corners (“chunking”) would further improve recognition.

  • Optimal pauses: 300 ms for squares, 467 ms for triangles.
  • Shapes rendered in multiple strokes; circles remained continuous.
  • “I don’t know” removed to study confusion patterns.
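The chunked rendering can be sketched as an event timeline: each polygon edge becomes a discrete stroke, followed by a corner pause of the study’s optimal duration. The stroke duration, function name, and event format are assumptions for illustration (0.5 s per edge follows from Experiment One’s 2 s loop over four edges):

```python
def stroke_schedule(corners, stroke_time, pause_ms):
    """Build a timeline of (start_s, end_s, action) events that renders a
    polygon as discrete strokes separated by corner pauses ("chunking")."""
    events, t = [], 0.0
    n = len(corners)
    for i in range(n):
        a, b = corners[i], corners[(i + 1) % n]
        events.append((t, t + stroke_time, ("stroke", a, b)))  # draw one edge
        t += stroke_time
        events.append((t, t + pause_ms / 1000.0, ("pause", b)))  # rest at corner
        t += pause_ms / 1000.0
    return events

# Square: four strokes with the study's ~300 ms corner pauses
square = [(0.0, 0.0), (0.06, 0.0), (0.06, 0.06), (0.0, 0.06)]
timeline = stroke_schedule(square, stroke_time=0.5, pause_ms=300)
```

A triangle would use the same scheduler with three corners and 467 ms pauses.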

Results:

  • Passive touch: Accuracy jumped from ~51% (static) to 83% (dynamic). Circle–square confusion dropped sharply.
  • Active touch: Accuracy rose from ~57% (static) to 85% (dynamic).
  • Confidence remained significantly higher for dynamic in all cases.

Feedback: Participants described static shapes as “blurry” and dynamic multi-stroke shapes as “clear” or “sharp.” Many used pauses to count corners and identify polygons.


Takeaways

  1. Dynamic tactile pointers outperform static for confidence and (with still hands) accuracy.
  2. Multi-stroke rendering with pauses dramatically improves polygon accuracy, with gains of roughly 30 percentage points.
  3. Timing matters — ~300 ms for squares, ~467 ms for triangles (at 6 cm size).

Applications:

  • Safer in-car controls (gestures with distinct haptic shapes).
  • Remote geometry teaching for visually impaired students.
  • Highlighting parts of tactile diagrams.

Next Steps:

  • Explore different shape sizes, speeds, and more complex or similar shapes (e.g., oval vs circle).

Speaker 1: This shows that with careful design — especially dynamic multi-stroke feedback with optimised pauses — we can dramatically improve our ability to identify shapes by touch alone.

Speaker 2: And as mid-air haptics evolves, it could transform how we interact with digital and physical worlds, especially for those who rely on touch over vision.