The Basics

  • The target audience of SeeSign is beginning signed language students who are still novice language users. Their basic knowledge of the language helps them understand the parameters used in the application, while their limited vocabulary makes SeeSign genuinely useful to them.
  • SeeSign is not a language dictionary. It does not provide definitions of words. It is a signed language recognition search engine.
  • SeeSign helps users search for and identify signs through key parameters of the language recognized by signed language linguists. The parameters used are:
      • Handshape
      • Location
      • Movement
      • Point of Contact
      • Number of Hands
  • SeeSign currently works best on screens in a landscape orientation (such as phones and tablets turned horizontally). This is due to the design of the Nomen Project website.
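To picture how a parameter-based lookup works, here is a purely conceptual sketch (not SeeSign's actual implementation; the sign entries and field names below are invented for illustration): each sign is described by the five parameters above, and a search keeps only the signs that match every parameter the user was able to identify.

```python
# Toy database: each sign described by the five SeeSign parameters.
# Entries are illustrative examples, not taken from SeeSign's real data.
SIGNS = [
    {"gloss": "MOTHER", "handshape": "open-5", "location": "chin",
     "movement": "tap", "contact": "thumb", "hands": 1},
    {"gloss": "FATHER", "handshape": "open-5", "location": "forehead",
     "movement": "tap", "contact": "thumb", "hands": 1},
    {"gloss": "BOOK", "handshape": "flat-B", "location": "neutral space",
     "movement": "open", "contact": "palms", "hands": 2},
]

def search(**observed):
    """Return signs matching every parameter the user observed."""
    return [s for s in SIGNS
            if all(s.get(k) == v for k, v in observed.items())]

# A user saw a one-handed sign made at the chin:
matches = [s["gloss"] for s in search(location="chin", hands=1)]
print(matches)  # ['MOTHER']
```

The key idea is that the user never needs to know an English word: supplying even a few of the five parameters narrows the candidates down to a short, browsable list.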

Background – I saw a problem and thought to myself, how can I use what I’ve learned to help others?

SeeSign is an application idea that was first born during my freshman year of college, while I was taking Introduction to Signed Language. My professor mentioned that it is easy to look up an English word in American Signed Language (ASL): you can flip through any alphabetically sorted ASL dictionary, or Google “what’s the sign for ___?”, and quickly find the sign you are looking for. However, it’s impossible to do the reverse lookup! If you see a sign you don’t know, even if you can replicate exactly what it looks like, there is no way to “search” for what that sign means. You can’t Google, or find in a book, a sign that you have no English equivalent for. The only way to figure out the English meaning of an unknown sign is to ask another human being. This problem is especially relevant to beginning signed language students, because they are only starting to build their vocabulary in American Signed Language.

With the above problem in mind, I decided I wanted to be the one to create that “reverse search engine.” However, I did not have a programming background or the means to create this kind of complex application. Then, in my fourth year at the University of New Mexico, I took a class through the Honors College called Things That Make Us Smart. I proposed my application idea to my professor, and he knew the perfect web-based platform that I could use to build SeeSign with no programming experience needed. That first semester I spent my time researching others’ past attempts to create an ASL-to-English search engine and developing the first prototype of my application. Part of the course requirements was that I blog about my thought process and the steps I took, so if you would like to know more about those initial details, I encourage you to take a look at my blog.

Looking Forward

SeeSign first started becoming a reality in August 2015, and it has come a long way from being only a freshman student’s brainchild. Still, what you see now is limited compared to where SeeSign is headed. The database currently contains 10 signs; you can expect the set of recognizable signs to keep expanding toward the full range of English vocabulary. The movement category currently uses only descriptive words; in the future, you can expect a visual image to accompany each descriptor. The model in the v3 app is a signed language student; you can expect a local deaf model, with an expansive knowledge of ASL, to take her place in later versions.

Currently, only a small handful of people know about SeeSign and how it works. As I spread the word and demo the app, I will make revisions as needed to keep it as user-friendly as possible. The potential benefits this application can offer signed language students are extensive. There is much more to come from SeeSign.
