Category: Uncategorized

  • Concept: “Passion economy” platform for poetry

    Could more people make money sharing their poetry?

  • Concept: Platform for reviewing poetry, intellectual conversations, etc.

    [I don’t remember where I was headed with this one, but I think it gets developed in later proposals.]

  • Concept: Visualizing Intersectionality

    If we could see all the intersections, down to the point of very small groups (5-10 people), what would it look like?

    If each category were binary with uniformly distributed values, then 29 categories would be more than enough to identify each American uniquely (2^29 ≈ 537 million, more than the US population).

    How many categories would it take if their values are non-uniformly distributed?
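
    For the uniform binary case, the arithmetic above can be checked in a few lines (the population figure is approximate):

```python
import math

# Minimum number of binary, uniformly distributed categories needed to give
# each of N people a unique combination of values: ceil(log2(N)).
def categories_needed(population):
    return math.ceil(math.log2(population))

print(categories_needed(330_000_000))  # approx. US population -> 29
```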

    A visualization for a few categories: https://cran.r-project.org/web/packages/nVennR/vignettes/nVennR.html

    We would need something different for higher dimensions!

    The data to answer this question might not exist.

  • Concept: MoNoMoDate

    Ex-Mormon Dating App

    A dating app for a very specific niche.

  • Concept: Custom View Windows

    Replace a window with a digital display, put one or more cameras on the back of it, then apply transformations to the outdoor view: make a rainy day sunny, make a short day longer, or replace a brick wall with a stunning view.

    Difficulties:

    • parallax: the illusion is ruined as you move around the room unless the display tracks your location, à la Disney/Lucasfilm’s “Stagecraft”. And I bet that’s patented.
    • further, even Stagecraft can’t alter the view for different people separately
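
    To make the parallax problem concrete, here is a minimal sketch (hypothetical numbers, simple similar-triangles geometry, scene treated as a flat plane) of how the visible slice of the outdoor scene depends on where the viewer stands:

```python
# Hypothetical sketch: horizontal span of a scene plane (scene_d metres beyond
# the window) visible through a window of width window_w, for an eye at
# horizontal offset eye_x, standing eye_d metres inside the room.
# Extend the rays from the eye through the window edges out to the scene plane.
def visible_span(eye_x, eye_d, window_w=1.0, scene_d=10.0):
    left = -window_w / 2 + (-window_w / 2 - eye_x) * scene_d / eye_d
    right = window_w / 2 + (window_w / 2 - eye_x) * scene_d / eye_d
    return left, right

print(visible_span(0.0, 2.0))  # centred viewer: (-3.0, 3.0)
print(visible_span(0.5, 2.0))  # viewer steps right: the visible span shifts left
```

    Two viewers at different positions see different spans, which is why a single flat display can satisfy at most one tracked viewer at a time.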

    It seems like what you really need is a smart hologram: something that can alter or override the transmission of light at all angles. A display like that would not need to track viewers’ locations, but it would massively complicate the computational task of updating the view.

    Looking Glass Factory has a holographic display that might work: https://www.youtube.com/watch?v=EMUdmE0lKIU

    Though it’s not currently large enough for an entire window, at least not a very large one.

    Light Field Lab also seems to be working on something similar.

  • Concept: Play Reading UI

    Follows along in the text as a group reads or performs a play.

  • Concept: Voice-to-instrument

    Use a CycleGAN-like approach to transform scat-style singing into a musical instrument solo or into entire songs.

    Compare to TimbreTron.
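
    A framework-free sketch of the core CycleGAN constraint, with toy linear stand-ins for the two generators (real ones would be neural networks operating on spectrogram frames):

```python
# Toy stand-ins for the two generators: G maps voice features to instrument
# features, F maps back. CycleGAN trains both so that F(G(x)) stays close to x.
def G(frame):
    return [2.0 * v for v in frame]

def F(frame):
    return [0.5 * v for v in frame]

def cycle_consistency_loss(frames):
    # Mean absolute error between each frame and its round trip F(G(x)).
    total = sum(abs(v - w) for f in frames for v, w in zip(f, F(G(f))))
    return total / sum(len(f) for f in frames)

print(cycle_consistency_loss([[0.1, 0.2], [0.3, 0.4]]))  # 0.0: these toy maps are exact inverses
```

    In the real system this loss would be combined with adversarial losses so that G's output also fools an instrument-vs-fake discriminator.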

  • Concept: Novelist

    A system for novel-writing. Mostly it just lets people set variables and intelligently refer to them within the text. This would require localization-style machinery to adjust the grammatical case and number of the variables. It could provide a user interface that makes this easier for certain classes of variable: Character, Location, Event, Relationship, etc.

    I’ve been using a combination of Gramps, Kate, and a Python templating engine called Chevron, plus a JSON data file, to achieve much of this effect. But Gramps and the JSON file have the potential to come into conflict.
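
    A minimal sketch of the variable-substitution half of this, using Python's stdlib string.Template as a stand-in for Chevron (the record and its fields are made up):

```python
import json
from string import Template

# Hypothetical character record, as it might live in the JSON data file.
data = json.loads('{"hero": {"name": "Alice", "they": "she", "them": "her"}}')

# The manuscript refers to variables, never literal names, so renaming a
# character means editing one record rather than the whole draft.
sentence = Template("$name smiled. I waved at $them, and $they waved back.")
print(sentence.substitute(data["hero"]))
```

    The pronoun keys here handle grammatical case by hand; adjusting number automatically (singular vs. plural agreement) would need real inflection machinery on top of this.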

    Due to the GPL2+ license of Gramps, it would be difficult to make use of it directly. If we could read the Gramps database we could leverage that, but it would be nicer to have an integrated solution.

    Rust + Azul GUI toolkit might be a good option to build the data processing and the UI in one place.

  • WIP: Upgrade Yakkity

    Yakkity uses a non-ML algorithm which produces mediocre results. Improving the algorithm is the top priority. An easy win would be to train the weights by which it ranks candidates. This would require only labeling a few hundred candidates and performing least squares regression. A harder win would be to make a generative neural network algorithm to produce novel mondegreens. I’m not even sure if that would be better—the need for nearness to the pronunciation of the original sequence is a fairly tight constraint. But who knows—maybe that constraint is unnecessarily tight and is preventing more creative puzzles? Also, the ability to coin new words would be nice.

    Second priority: Yakkity visuals. This is something I would probably do only if the algorithm got up to snuff.

  • Concept: NES-to-SNES translator

    Gather a bunch of SNES images. Downscale them to the NES resolution. Train a de-convolutional (transposed convolutional) network to translate the NES scale to the SNES scale. Then apply this network to NES games to get a SNES-style upsample. If necessary, embed this in a GAN until the upsampled NES games are indistinguishable from the SNES games.
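
    The transposed-convolution upsampling step can be illustrated without a framework (1-D, stride 2, hand-picked kernel; a real network would learn many 2-D kernels):

```python
# 1-D transposed convolution with stride 2: each input sample "stamps" a
# scaled copy of the kernel into the output, roughly doubling the resolution.
def transposed_conv1d(signal, kernel, stride=2):
    out = [0.0] * ((len(signal) - 1) * stride + len(kernel))
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i * stride + j] += s * k
    return out

print(transposed_conv1d([1.0, 2.0, 3.0], [1.0, 0.5]))
# [1.0, 0.5, 2.0, 1.0, 3.0, 1.5]
```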

    Actually, the resolutions are either identical or very similar, so this is really a matter of style rather than scale. It could work to train a GAN that translates NES style to SNES style, with the discriminator guessing whether an image is NES or SNES.