Grasshopper

algorithmic modeling for Rhino

A one-hour test with speech recognition in Grasshopper. It's the first time I've ever used the Windows built-in recognition engine, or indeed invoked it through code. I'd say the results were definitely mixed.


Comment by Luis García Lara on September 18, 2019 at 9:18am

Hi David! Would you share the code for the speech recognition, or give some ideas for how to reconstruct it?

Regards

L

Comment by Scott Penman on July 15, 2012 at 12:53am

In combination with the Horster reference components... this could definitely provide some pretty interesting interaction.

Comment by Michael Pryor on June 27, 2012 at 7:40pm

I wonder if it can work in 3d space with rhino. http://www.technologyreview.com/view/428350/the-most-important-new-...

Comment by David Rutten on June 27, 2012 at 4:38pm

LeapMotion.... Douglas Adams was right. Soon we'll all be frozen in our chairs in fear of triggering another unwanted motion command.

Comment by Arie-Willem de Jongh on June 27, 2012 at 3:37pm

In combination with this: http://leapmotion.com/ it might be a powerful solution.

Comment by David Rutten on June 27, 2012 at 3:22pm

You'd need a good headset and be alone in a space before it would be even remotely conceivable that it would help. But I can see how it might be an interesting way to deal with an application after a while. One of the worst things a UX designer can force a user to do is move her hand from the mouse to the keyboard and back. You get this with short-cut keys that cannot be grabbed with one hand and also with command entries. Voice commands are certainly one way out of this pit.

Comment by Mateusz Zwierzycki on June 27, 2012 at 2:29pm

I think even if it worked with 100% accuracy, it would not be used. The problem is ergonomics and certain mental associations ;). When you want to do something on your PC, you just sit down, take the mouse in hand and do it. Speaking to the computer simply wastes too much energy... you get really tired after a while. (On the other hand, remember Wall-E?) I don't say it's useless: we can find some situations where it would be a real advantage (handicapped users?)... and now you'll probably say "thank you, Captain Obvious", so it's time to end this post.

Comment by Ethan on June 27, 2012 at 2:11pm

Early this morning I asked David if all this was possible, to try to streamline my workflow... I run XP, so I don't have great built-in voice recognition. I downloaded a program called My Voice Controller http://www.5hyphen.com/ and, by changing the profile, set it to call up components using this method from David:

If you can have your speech software send the following keyboard presses:

F4 X Enter, where X is the alias or name of a component, then you can do it.

Anyway, I set up a test and it works great. Now I just have to program my most common components as voice commands and I'll never take my hands off my Wacom again! Thanks David!!!
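David's keystroke method can be sketched as a small lookup from spoken phrase to the key sequence to inject. The phrase-to-alias table below is hypothetical, and actually sending the keys would need a tool like the one mentioned above (or SendKeys); this only builds the sequence:

```python
# Sketch of the voice-to-keystroke mapping described above.
# The phrase -> alias table is hypothetical; real aliases depend
# on your own Grasshopper setup.

VOICE_ALIASES = {
    "slider": "slider",
    "panel": "panel",
    "circle": "cir",   # alias assumed for illustration
}

def keystrokes_for(phrase):
    """F4 opens the component search box, the alias is typed,
    and Enter places the component on the canvas."""
    alias = VOICE_ALIASES.get(phrase.lower())
    if alias is None:
        return None  # unrecognized phrase: send nothing
    return ["F4"] + list(alias) + ["Enter"]

print(keystrokes_for("panel"))  # → ['F4', 'p', 'a', 'n', 'e', 'l', 'Enter']
```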

 

Comment by David Rutten on June 27, 2012 at 2:02pm

I don't know if it would be intuitive or not. First it has to actually work with fairly high accuracy. I suppose saying a word and having a palette of possible component icons pop up would be faster than double-clicking and typing a key phrase.
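The pop-up palette idea could work as a fuzzy lookup over component names, so an imperfectly recognized word still narrows the choices. A minimal sketch, assuming an illustrative name list and difflib matching (not Grasshopper's actual component registry):

```python
import difflib

# Illustrative component names; a real implementation would query
# Grasshopper's component server instead.
COMPONENTS = ["Circle", "Divide Curve", "Extrude", "Loft", "Move",
              "Offset", "Pipe", "Point", "Rotate", "Scale"]

def palette_candidates(spoken_word, n=3, cutoff=0.5):
    """Return up to n component names closest to the recognized word,
    to populate a pop-up palette for the user to pick from."""
    return difflib.get_close_matches(spoken_word.title(), COMPONENTS,
                                     n=n, cutoff=cutoff)

print(palette_candidates("loft"))    # exact hit
print(palette_candidates("sirkle"))  # misheard "circle" still surfaces it
```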

Comment by Mateusz Zwierzycki on June 27, 2012 at 1:39pm

Would it be intuitive to use speech recognition for component initialization?
