New user to OpenMPT who is blind

Started by sethmhur, July 31, 2017, 18:25:46


Diamond

Interestingly, the new test build does in fact speak the data when I initially switch to the "Patterns" tab.  This is a good start.  I tested with both JAWS and NVDA.  It would be useful if the data could be spoken as you navigate through the pattern using the arrow keys, Tab, and so on, but I'm not quite sure how to go about it, since my guess is that the data is sent through the Windows accessibility API and is not actually displayed anywhere on the screen.

Saga Musix

That is correct, Windows queries this data from OpenMPT. It might be possible to instruct the screen reader to automatically speak this information, but as far as I'm aware, most screen readers should also offer a way to force them to re-read the content of the focused window.
» No support, bug reports, feature requests via private messages - they will not be answered. Use the forums and the issue tracker so that everyone can benefit from your post.

Diamond

I'll have to think about that.  It might be possible using some scripting, but again this will be different for each screen reader.  It may be beyond my knowledge of JAWS scripting.  I will have to investigate.

Saga Musix

Are you sure you need to use scripting for that? In Narrator, you can simply press Ctrl+Shift+Return. It's not as comfortable as automatically reading it out, but it seems usable.

Diamond

INSERT+TAB for the "Say Window Prompt and Text" command seems to work in JAWS.  Perhaps that might allow for some simpler scripting to make it work with the navigation keys.  I don't use NVDA much, so I'm not sure whether it has a similar command, but I will try to find out.

Diamond

Sigh.  Looks like I spoke too soon.  INSERT+TAB does repeat the information, but it repeats the data initially obtained when the tab gains focus.  The buffer JAWS uses to store the data is not refreshed as you navigate through the pattern.  My guess is that when the JAWS developers wrote the script that handles this functionality, they were not expecting the data to change while the window still had focus.

Diamond

So after some experimenting, I discovered that sending a WM_SETFOCUS message to OpenMPT causes JAWS to refresh and speak the data in its buffer without having to switch focus away from and then back to the "Patterns" tab.  Insert+Tab seems to work for reviewing the data in NVDA as well, and no additional steps are necessary to refresh the buffer.  It does unfortunately also speak some extraneous data, such as all the channel numbers displayed on the screen, but I suspect this is a quirk of NVDA and has nothing to do with OpenMPT.
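For anyone curious, the refresh trick described above can be sketched in Python. This is only an illustration of the mechanism (a JAWS script would do the equivalent natively); the window-lookup is an assumption, WM_SETFOCUS is the standard Win32 message code, and the function is deliberately a no-op on non-Windows platforms:

```python
import sys
import ctypes

WM_SETFOCUS = 0x0007  # standard Win32 message identifier


def refresh_screen_reader_buffer(hwnd):
    """Nudge a screen reader to re-read a window's accessible text.

    Sends WM_SETFOCUS to the given window handle so that JAWS re-queries
    and re-speaks its buffer, as described above. Returns False when not
    on Windows or when no valid handle was supplied, True after sending.
    """
    if sys.platform != "win32" or not hwnd:
        return False
    ctypes.windll.user32.SendMessageW(hwnd, WM_SETFOCUS, 0, 0)
    return True


if __name__ == "__main__":
    hwnd = 0
    if sys.platform == "win32":
        # Hypothetical lookup; the real OpenMPT window caption also
        # includes the name of the loaded module.
        hwnd = ctypes.windll.user32.FindWindowW(None, "OpenMPT")
    print(refresh_screen_reader_buffer(hwnd))
```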

Saga Musix


Diamond

Yes indeed.  Thanks for adding this.  It works better than my previous method of reading the data from the status bar.

Saga Musix

Do you think the format of the accessibility description should be user-definable (through a hidden setting)? That way, if someone doesn't need the row information, they could omit it, or if someone wants the pattern number, they could include it, etc...

Diamond

I hadn't actually thought of that, but I can see how it might be useful.

Saga Musix

#26
There is now a configurable setting, Pattern Editor.AccessibilityFormat (full description is in the linked wiki article). The column type is now also adjusted for PC events (e.g. "volume" becomes "plugin parameter" instead).
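To illustrate how such a user-definable format might be expanded per pattern cell, here is a small sketch. The `%name%` placeholder syntax and field names below are hypothetical, chosen only for the example; the actual syntax of Pattern Editor.AccessibilityFormat is documented in the wiki article:

```python
def describe_cell(fmt, cell):
    """Expand %name% placeholders in fmt with values from the cell dict."""
    out = fmt
    for key, value in cell.items():
        out = out.replace("%" + key + "%", str(value))
    return out


# A pattern cell with hypothetical field names.
cell = {"row": 12, "channel": 3, "note": "C-5", "instrument": 1, "volume": 64}

# A user who doesn't need row information could simply omit %row%:
print(describe_cell("%note% %instrument% v%volume%", cell))  # C-5 1 v64

# Another user might want the row spoken first:
print(describe_cell("Row %row%: %note%", cell))              # Row 12: C-5
```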

Edit: And as of r8704, there is now also an accessibility description of the currently selected envelope point in the instrument editor!

Diamond: Do you think it would be worthwhile to have a wiki page with tips and tricks for visually impaired and blind users? It could also be included in the manual that way.

musicalman

#27
Whoa, a lot going on here! Have a lot to say but don't have much time at the moment to give this proper attention, as I'll soon be leaving home for the weekend. So forgive me if I don't sound too informed. I'm new to tracking in general so all of this is a lot to take in. But I've been wanting to learn.

Firstly, to the OP: I'd also be willing to help you with learning OpenMPT and the like. While I can't yet write a piece to save my life, I have messed around with a lot of different things and know how they work (pattern editor, sample/instrument editor, envelope editor, etc.) and could at least describe what the basic keys do.

Primarily I use NVDA as a screen reader, though I do have JAWS, and I'm also on Windows 10, so I can use Narrator as well. I just tested the accessibility features, and they work as advertised in the pattern editor, and it tries in the instrument editor as well. Pretty awesome start! As others have said, it would be very useful to be able to get info in real time as you press navigation keys, or to somehow keep track of what you're selecting, since that is kind of confusing without speech. But I won't complain so long as the implementation makes things easier than they currently are.

I know nothing about programming, but perhaps something that works similarly to Osara might help. Osara is an add-on for the popular Reaper DAW which works best with NVDA, and somewhat with JAWS in my experience; I'm not sure about Narrator, but I'm pretty sure it works well there too, and I'm almost certain it uses MSAA like this does. It does change the keyboard mapping and adds some custom actions to Reaper, but that's not important here. What's relevant is that it interfaces with Reaper somehow to give feedback on what it's doing. Osara has never failed me, at least not from within NVDA. I'm fairly certain that the devs of Osara would be willing to help with ideas on how to implement such things into OpenMPT if you run into roadblocks.

I'm of the opinion that JAWS support is a lot of extra trouble, since it doesn't respond as well to MSAA, though it does to some extent. Still, NVDA and Narrator seem less clunky about that, especially the former in my opinion, and even when I used JAWS full-time I still had NVDA installed and would switch over for cases exactly like this. So if JAWS support ends up being a little finicky, it's probably not a huge deal to most people, as they can use one of the other screen readers.

One thing not yet mentioned: the sample mapping dialog is really difficult with a screen reader.  I've sort of figured it out but it isn't possible to do things with the keyboard directly, so I have to use the mouse simulation functions of my screen reader to click on things. I get scared every time I try to go in there to make multisampled instruments, and I'm probably not even doing it the way it's intended to be done. I can't really suggest how to make it more accessible at this point but would be willing to test and try things. After tomorrow night I should be more free.

Last but not least... maybe this'll help the devs of FamiTracker and other trackers which already have extensive keyboard navigation to implement screen reader support? Just throwing stuff around at this point, but I couldn't resist. Most of you guys are more active in this community, so you may have connections and be able to make it more likely to happen :)

Edit: actually tested and removed some questions I had before testing

Saga Musix

#28
There are plans to add a scripting API to OpenMPT (if all goes well, a first version of the scripting API will be present in OpenMPT 1.28), which would help with extending OpenMPT in many ways, including for screen readers. I think this way blind users could potentially have more control over what kind of information they want to get from their screen reader. Something similar to Osara could probably be built then.
I'm not a huge fan of "look at this other piece of software and how it does things", but if you can name some specific things that you think would be great to have in OpenMPT as well, please mention them.

Quote: "One thing not yet mentioned: the sample mapping dialog is really difficult with a screen reader."
There has also been criticism from sighted users, so there are plans to redo the dialog at some point. When doing so, I will keep accessibility in mind.
OpenMPT's current test builds (version 1.27) add support for the SFZ instrument format, which is entirely text-driven, so maybe this would help you with defining instruments and then later refining them in OpenMPT.

musicalman

Yeah, I can see why you wouldn't want to take the "hey, let's see how it works and do it that way" approach. I was just thinking that if you had no clue where to go to improve screen reader support, something like Osara would be the best example of integrating such support, and you could get a better concept of how it might be accomplished from a developer standpoint. It sounds like you already do have that, though, so I really have high hopes.

And I saw somewhere about SFZ support but didn't have time to look into it. It's a format I use on a regular basis, so this is good news! I'm assuming only the root key, lokey and hikey, and perhaps loop definition are actually supported? If so, this is enough; I don't see how you could add much else given the nature of LFOs and envelopes in MIDI vs. OpenMPT. I'll try this when I get home, as well as taking another look at envelopes now that they speak.
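For reference, a minimal SFZ file using just those opcodes might look like the sketch below. The sample filenames are made up, and exactly which opcodes OpenMPT's importer honors is an assumption worth testing against the current test build:

```sfz
// A two-region multisampled instrument.
<region>
sample=piano_c4.wav
pitch_keycenter=c4   // root key of the sample
lokey=c3             // lowest key this region responds to
hikey=e4             // highest key this region responds to
loop_mode=loop_continuous

<region>
sample=piano_c5.wav
pitch_keycenter=c5
lokey=f4
hikey=c6
```

Since SFZ is plain text, a file like this can be written entirely in a text editor with a screen reader and then refined in OpenMPT afterwards.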