Impulse tracker format questions

Started by TheRealByteraver, March 26, 2019, 16:24:48


TheRealByteraver

Thanks for the elaborate answer.
Quote
XM compresses better than uncompressed IT because it uses delta samples, which generally compress better as they reduce the range of the input data (you see the same principle being applied in the IT sample compression description).
Yes, my player loads .xm files, so I am familiar with that. What I meant to say is that, given the same kind of data, the WinRAR algorithm works slightly better, which is to be expected, I suppose. I have to dig into itsex.c a bit more before I make further comments on that subject.
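To picture the delta trick mentioned above, here is a minimal sketch of delta encoding and decoding for 8-bit sample data, roughly the way XM stores its samples (the function names are mine; this is not code from any existing loader):

```cpp
#include <cstdint>
#include <vector>

// Each stored byte is the difference to the previous sample. Since neighbouring
// samples tend to be similar, most deltas stay close to zero, which helps a
// general-purpose compressor afterwards.
std::vector<int8_t> DeltaEncode(const std::vector<int8_t> &pcm)
{
    std::vector<int8_t> out;
    out.reserve(pcm.size());
    int8_t previous = 0;
    for(int8_t sample : pcm)
    {
        out.push_back(static_cast<int8_t>(sample - previous));
        previous = sample;
    }
    return out;
}

// Decoding is just a running sum of the stored deltas.
std::vector<int8_t> DeltaDecode(const std::vector<int8_t> &deltas)
{
    std::vector<int8_t> out;
    out.reserve(deltas.size());
    int8_t value = 0;
    for(int8_t delta : deltas)
    {
        value = static_cast<int8_t>(value + delta);
        out.push_back(value);
    }
    return out;
}
```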

Quote
Any IT file written in Impulse Tracker since 1997 or 1998 will use compressed samples unless the author explicitly disabled sample compression. So yes, that's a whole lot of files and I would consider any IT player without sample decompression to be incomplete and broken.
OpenMPT 1.21 is very old by now, so one of the main reasons for not enabling IT-compressed samples by default in OpenMPT is getting less and less relevant. I will consider enabling it by default at least for mono samples in the future, maybe also for stereo samples (given that more players have issues with those, as there was no tracker that could save them before OpenMPT 1.21).
That's exactly what I needed to know, thank you. I guess it makes sense, since the compression is lossless and the algorithm isn't overly complex.

That should keep me quiet for a while, especially since I have to work this weekend :(

TheRealByteraver

Hi again, I have another question related to what you explained previously:
Quote
Every channel has a parent channel field (which is just 0 for master channels). The other way around, you just have to walk through all virtual channels to find all of them belonging to a specific master channel.
So I guess each channel (let's say you have a maximum of 256) can be either of the logical or the virtual variety? Also, you would keep a table that translates a maximum of 64 logical channels to one of the virtual channels (in the 0..255 range), so that you know which channel should have its state changed from logical to virtual when a new note event happens for that logical channel?
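Something like this is what I have in mind (a purely hypothetical sketch; the names and layout are mine and are not meant to reflect OpenMPT's internals):

```cpp
#include <array>
#include <cstdint>

constexpr int MaxPatternChannels = 64;   // logical (pattern) channels
constexpr int MaxMixerChannels   = 256;  // mixer voices, master + virtual

struct MixerChannel
{
    bool    active        = false;  // currently producing sound?
    bool    isBackground  = false;  // true once demoted to a virtual (NNA) voice
    uint8_t parentChannel = 0;      // pattern channel this voice was spawned from
    // ... volume, frequency, envelope positions, sample position, etc.
};

struct PlayerState
{
    std::array<MixerChannel, MaxMixerChannels> mixerChannels;

    // For every pattern (logical) channel: index of the mixer channel that is
    // currently its master voice, or -1 if no note is playing on it.
    std::array<int16_t, MaxPatternChannels> masterVoice;

    PlayerState() { masterVoice.fill(-1); }
};
```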

Example (a code sketch of this handoff follows the list below). Situation:
- We are processing logical (master) channel #1
- NNA is set to "Continue" for the selected instrument
- Note is already playing in virtual channel #5, which is known by logical channel #1 as its primary (master) channel
- New note event happens
---> Mixer sets "channel-is-virtual" flag for channel #5 (was not set before as it was a master channel)
---> Mixer looks for a free virtual channel, finds inactive virtual channel #23
---> Mixer plays note in virtual channel #23. Channel #23 is now the new master channel for logical channel #1
---> Player is now updated with the information: "the master mixer channel for logical channel #1 is now virtual channel #23"
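
Using the hypothetical structures from the sketch above, that handoff could look roughly like this (again just an illustration, not OpenMPT's code):

```cpp
// A new note arrives on pattern channel `patternChannel` and the instrument's
// NNA is set to "Continue". Returns the mixer channel the new note should use,
// or -1 if no free channel was found.
int HandleNewNoteContinue(PlayerState &state, int patternChannel)
{
    // Demote the current master voice (mixer channel #5 in the example) to a
    // background/virtual voice; it simply keeps playing.
    int old = state.masterVoice[patternChannel];
    if(old >= 0)
        state.mixerChannels[old].isBackground = true;

    // Find a free mixer channel (#23 in the example) for the new note.
    for(int i = 0; i < MaxMixerChannels; i++)
    {
        MixerChannel &chn = state.mixerChannels[i];
        if(!chn.active)
        {
            chn.active = true;
            chn.isBackground = false;
            chn.parentChannel = static_cast<uint8_t>(patternChannel);
            state.masterVoice[patternChannel] = static_cast<int16_t>(i);
            return i;  // trigger the note on this mixer channel
        }
    }
    return -1;  // no free channel; the caller decides what to steal, if anything
}
```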

I'm not sure how you would keep control over each channel otherwise while processing effects (vibrato, volume change etc).

Or is it just so that, if a channel is a master channel, it takes all its info from the logical channel it is derived from (volume, frequency, new note flag / retrig flag, etc.), whereas a virtual channel only takes its own data into account (envelope positions, source instrument, current volume, panning, index into the sample data, and so on)?

I'm used to logical channels "being in charge", meaning there is a 1:1 relation between each logical and virtual channel (like an .IT module in sample mode).
It is my impression that in the instrument-based .IT system you first update a pattern row, set all kinds of flags (new note event, retrig sample event, etc.) and then go through all channels, virtual or not, and update them based on this information. Is that accurate?

Saga Musix

#17
How to keep track of what channel is what is really just an implementation detail, so it's not really important how OpenMPT does it. Anyway, what it does is have the first x channels of the mixer (x being the number of pattern channels) represent master channels, and everything after that is implicitly a virtual channel (or an editor channel in case you preview notes through the GUI, but again that's just an irrelevant detail). Most pattern effects (apart from global ones) only apply to master channels, so all the effect processing happens just like with any other format - in particular this means that the loop that parses pattern channels only goes over the first x mix channels, and as said those are always pattern channels. Things like auto-vibrato or envelopes are applied to both master and virtual channels, and this happens in a separate loop that goes over all channels.
Practically this means that you can only have 256-x virtual channels, so in theory you will have fewer virtual channels available than possible if, say, you only trigger notes on a single channel but you have a 64-channel module. But this edge case is rather unrealistic, so our code doesn't care about it.
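As a rough sketch of that two-loop structure (reusing the hypothetical PlayerState sketched earlier in this thread; the helper functions are placeholders and none of this is OpenMPT's actual code):

```cpp
// Placeholder helpers, only here to make the sketch self-contained.
void ProcessPatternEffects(PlayerState &, int) { /* row commands, slides, ... */ }
void ProcessEnvelopesAndAutoVibrato(PlayerState &, int) { /* envelopes, auto-vibrato, ... */ }

void ProcessTick(PlayerState &state, int numPatternChannels)
{
    // Loop 1: pattern effect processing. Only the first x mixer channels are
    // parsed, because those always represent the pattern (master) channels.
    for(int chn = 0; chn < numPatternChannels; chn++)
        ProcessPatternEffects(state, chn);

    // Loop 2: per-voice processing, applied to master and virtual channels alike.
    for(int chn = 0; chn < MaxMixerChannels; chn++)
    {
        if(state.mixerChannels[chn].active)
            ProcessEnvelopesAndAutoVibrato(state, chn);
    }
}
```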

TheRealByteraver

You are right, it is a matter of implementation, but it is still interesting for me to know how OpenMPT does it. From what I understood from ittech.txt, Impulse Tracker originally (< v2.03 or something) really only allocated channels when they were actually used. It could then happen that a new note event in a master channel would not play if all available channels were already in use. Of course, a different environment (slow hardware) requires a different solution.

What I was brooding over, then, was essentially what the best way would be to move a playing note to a background channel in the mixer. But I think I have a better idea of how to do it now. Thanks for the reply, as always!

Saga Musix

Quote
Of course, a different environment (slow hardware) requires a different solution.
In fact, this seems more like a way to save memory than to save CPU time (I would expect that this solution actually requires a tiny bit more CPU time).
Also, whatever channel allocation strategy is best may heavily depend on the environment the code runs in. For example, libopenmpt does no dynamic heap allocations during playback, so the maximum number of channels must be known beforehand. This is due to the fact that dynamic heap allocation can be expensive (although allocators are much better than they were, say, 20 years ago) and we do not know which environment the code will be running in. In managed languages this optimization might not be necessary at all and it could be possible to just dynamically allocate more channels when needed. As always, in order to tell if any optimization makes sense, it must be measured.
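As a toy illustration of that last point (my own sketch, not libopenmpt code), the complete mixer state can be reserved before playback starts, so the rendering path itself never allocates:

```cpp
#include <array>

struct Voice
{
    bool   active = false;
    double samplePosition = 0.0;
    // ... remaining per-channel state
};

struct Mixer
{
    static constexpr int MaxVoices = 256;  // worst case, known up front
    std::array<Voice, MaxVoices> voices;   // no heap allocation during playback

    // Hand out an inactive voice, or nullptr if the pool is exhausted
    // (in which case the caller has to steal a voice or drop the note).
    Voice *AllocateVoice()
    {
        for(Voice &v : voices)
        {
            if(!v.active)
            {
                v.active = true;
                return &v;
            }
        }
        return nullptr;
    }
};
```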