To me it seems that with proper tweaking any display can be synced in 4x/5x modes with a slightly less accurate samplerate, unless of course the display doesn’t like the total H pixel count in the first place.
This was a hypothesis I had for a while, while attempting to “chase the pixel clock”. I even made a spreadsheet to calculate more easily what the samplerate should be to match certain VESA & CEA pixel clocks at various resolutions.
However, my conclusion is that it doesn’t really work like this.
An equally (or perhaps more) important factor is the line count: when it differs from the standard specs, the horizontal frequency ends up off-spec as well. In other words, it’s not so much about total pixels or vertical refresh, but about how fast each line is drawn. CEA 720p and 1080p in particular have significantly fewer lines than VESA PC modes, which is probably why the 256-vertical tweak has seen some success.
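To make the line-rate point concrete, here’s a small sketch. The total line counts are the standard CEA-861/VESA DMT figures (nothing OSSC-specific); horizontal frequency is just refresh rate times total lines:

```python
# Horizontal (line) frequency = vertical refresh x total line count.
# Total line counts below are the standard CEA-861 / VESA DMT figures.

def h_freq_khz(refresh_hz, total_lines):
    """How fast each line is drawn, in kHz."""
    return refresh_hz * total_lines / 1000

print(h_freq_khz(60, 750))   # CEA 720p60  (750 total lines)  -> 45.0 kHz
print(h_freq_khz(60, 1125))  # CEA 1080p60 (1125 total lines) -> 67.5 kHz
print(h_freq_khz(60, 1000))  # VESA 1280x960@60 (1000 lines)  -> 60.0 kHz
```

At the same 60 Hz refresh, the CEA modes’ lower line counts translate directly into lower line frequencies than comparable VESA PC modes.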
Some clarification to my earlier post: “scaling” is actually a misleading term; line multiplication is more accurate. It also works differently for generic and optimized modes.
Generic mode simply samples the analog signal at the full rate that matches the final, vertically multiplied target.
Optimized mode, on the other hand, samples at only the “base level”, i.e. 1x, set to match the dot rate of the console. In this case the signal actually is scaled horizontally, using pixel repetition.
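As a toy illustration of what pixel repetition means here (made-up sample values and function name, not actual OSSC internals): each 1x sample is simply emitted n times at the output rate.

```python
# Toy model of horizontal pixel repetition: each 1x sample is repeated
# 'factor' times in the output line (illustrative only).

def repeat_pixels(samples, factor):
    return [s for s in samples for _ in range(factor)]

line_1x = [10, 20, 30]
print(repeat_pixels(line_1x, 4))
# [10, 10, 10, 10, 20, 20, 20, 20, 30, 30, 30, 30]
```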
Interestingly, the pixel repetition can be set to a different factor for the active vs. total portion of the signal. E.g., for the 256×240 4:3 Lx3 mode, the H active area is multiplied (pixels repeated) 4 times, while the total area is multiplied 5 times.
As I interpret this, the active window (as seen by the display) and the total samplerate then conform more closely to the widescreen format of 720p. In effect, the OSSC creates a “fake” active window 1280 samples wide, of which only 1024 are filled by the original active signal (“close” to 4:3 aspect). In this way the originally 4:3 signal is manipulated to output something more similar to a 16:9 signal (the actual picture area is compressed relative to the total length). This is explained on the “Optimal timings” page of the OSSC wiki. (Tbh, that section of the wiki could probably be expanded/clarified even more.)
Example: optimized 256×240 4:3 Lx3 mode
Incoming dot rate, sampled at 341 samples per line
Outgoing total: 341×5 = 1705 (close to 720p’s 1650)
Outgoing H active: 256×5 = 1280 (i.e. resulting active area 1280×720, as expected for 720p)
Actual picture within H active: 256×4 = 1024 (resulting display AR 1024/720 ≈ 1.42)
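The arithmetic above can be checked with a short sketch (the 4x/5x repetition factors are my interpretation, as noted):

```python
# Numbers for the optimized 256x240 4:3 Lx3 example above.
samplerate = 341   # total samples per line at 1x ("base level")
h_active   = 256   # active samples per line at 1x

total_out   = samplerate * 5  # total area repeated 5x
active_out  = h_active * 5    # "fake" active window seen by the display
picture_out = h_active * 4    # actual picture within the active window

print(total_out)    # 1705, close to 720p's 1650 total
print(active_out)   # 1280 -> 1280x720 active area
print(picture_out)  # 1024
print(round(picture_out / 720, 2))  # 1.42 displayed AR
```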
EDIT: I was a bit confused regarding the pixel repetition in my original reply, and have added an example of how I interpret it to work. I’d be happy if someone corrected me on this, because I’m not sure I understand it correctly myself.