Grasshopper

algorithmic modeling for Rhino

As we've begun to share definitions within our office, we've found it a challenge to have more than one person interact physically with a definition, and thought maybe we just need more space.  We're using a piece of rear-projection glass as our Grasshopper canvas, with the Rhino environment projected on the wall.  That alone is great, but it also looks like multi-touch could have a promising future in Grasshopper.

Follow the link for more info:
http://lmnts.lmnarchitects.com/interaction/ghcanvas-real-estate/




Replies to This Discussion

Impressive, beats my 10-inch netbook screen hands down....
Thanks. We're still trying to figure out the right set of gestures, but it seems like GH is really amenable to multi-touch interaction... probably not a big surprise to most people on this site.
Wow.

What sort of thing are you looking to do at the same time with multi-touch? Sketching? Connecting wires? Dragging sliders?

--
David Rutten
david@mcneel.com
Seattle, WA
While we've just begun to mock this up with a single interface point (the pen), we're starting to think of gestures that might work without changing the way GH works. Zooming while dragging/connecting wires is feasible. Zooming will probably be the first use: just translating a standard zoom gesture (two fingers moving apart, or a double-tap to zoom to a location) into mouse events. However, there are probably some really good gestures (imploding/exploding clusters) that would require some changes. The question is: which gestures?
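To make the pinch-zoom translation concrete, here's a minimal sketch in Python. It assumes a generic touch framework that reports the (x, y) positions of two contact points every frame; the class name and the final hook-up to the GH canvas zoom are placeholders of our own, not actual Grasshopper SDK calls.

```python
# A minimal sketch of pinch-to-zoom, assuming a touch framework that
# reports per-frame (x, y) positions for two contact points. The
# hook-up to the actual canvas zoom is left as an assumption.
import math

class PinchZoomTracker:
    """Turns two moving touch points into a multiplicative zoom factor."""

    def __init__(self):
        self._last_distance = None  # finger separation on the previous frame

    @staticmethod
    def _distance(p1, p2):
        return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

    def update(self, p1, p2):
        """Feed the current positions of both fingers; returns a factor
        to multiply the canvas zoom by (1.0 means no change)."""
        d = self._distance(p1, p2)
        if not self._last_distance:
            self._last_distance = d
            return 1.0
        factor = d / self._last_distance
        self._last_distance = d
        return factor

    def reset(self):
        """Call when a finger lifts, so the next pinch starts fresh."""
        self._last_distance = None

# Usage: on every two-finger move event,
#   canvas_zoom *= tracker.update(finger_a, finger_b)
tracker = PinchZoomTracker()
zoom = 1.0
zoom *= tracker.update((100, 100), (200, 100))  # fingers touch down
zoom *= tracker.update((80, 100), (220, 100))   # fingers spread apart
print(zoom)  # > 1.0, i.e. zoomed in
```

In practice you'd want to zoom around the pinch midpoint rather than the canvas center, so the content under your fingers stays put, which is what people expect from the gesture.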

It seems like distinguishing between users isn't feasible, and probably isn't necessary. There's such a sordid history of that in CSCW anyway. Social coding (in the sense of "You work on that part over there, I'll work on this part over here") is probably not a good idea. However, collaborating over the same problem (taking turns) is worth exploring.
This looks amazing...

As far as social coding goes, having a function like xref from AutoCAD within the GH canvas could be another way around it.
I could be wrong, but I recall David once mentioning in one of the discussions that they are looking into it...
Thanks, Taka.

I'm not sure I understand how xref (2D line work or geometry?) within the GH canvas would help with collaboration on scripting. Please explain what you have in mind...
Clusters will probably offer that functionality in the future: people could work on different files that are linked together as clusters in a main file.
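As a rough sketch of how such linked clusters might behave, here's an illustrative Python snippet that watches linked definition files and reloads them into the main file when they change. The reload_cluster callback and the polling approach are purely hypothetical assumptions, not existing Grasshopper functionality.

```python
# A speculative sketch of "linked clusters": poll linked definition
# files on disk and fire a reload callback when one changes. The
# reload_cluster hook is an assumed host-side function, not GH API.
import os
import time

def watch_linked_clusters(cluster_paths, reload_cluster, poll_seconds=2.0):
    """Poll each linked file's modification time; on change, ask the
    host to re-read that cluster into the main definition."""
    last_mtimes = {p: os.path.getmtime(p) for p in cluster_paths}
    while True:
        for path in cluster_paths:
            mtime = os.path.getmtime(path)
            if mtime != last_mtimes[path]:
                last_mtimes[path] = mtime
                reload_cluster(path)  # re-read the changed cluster
        time.sleep(poll_seconds)

# Usage (paths and callback are illustrative):
#   watch_linked_clusters(["facade.ghcluster", "roof.ghcluster"],
#                         reload_cluster=print)
```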

"Social coding"? i don't think that's the right term, unless you are using grasshopper to share pictures of your holidays.
Vincent:
You're right, "social coding" is the wrong term. "Collaborative scripting" is a better characterization of what we're exploring (in general).

More specifically, we're interested in what sorts of behaviors (gestures for the moment) are most amenable to working on a spatial canvas like GH.
Besides the touch-screen interface, I see you also have the SpaceNavigator (for manipulating the viewport, I guess).

Maybe what's actually happening is that you guys are suffering from Musophobia :)
You might be right because I almost always use the trackpad on my laptop.
The SpaceNavigator is a stand-in at the moment for something better. It's a good stand-in, though.

Musophobia :) No, we're just tired of mice ;) Those issues aside, control of a mouse in a collaborative setting is a strange social phenomenon.
