
Content

Hi everyone,

Some of you might have seen my post on Twitter or have already tried the 480i to 480p hack with a test build. Today I want to explain how 480i rendering works on the PSX and what this hack can do.

First we must understand the VRAM organization of the PSX.

The VRAM is a nice flat memory, 1024 pixels wide and 512 pixels tall, with each pixel being 16 bits of color. This is fixed, meaning a game cannot change it to anything else for normal rendering.

(Yes, there is a 24-bit color mode, e.g. for FMVs, but let us ignore that, as it cannot be used for normal rendering.)
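As a minimal sketch of this layout (the helper names are mine, not anything from the core), the whole VRAM can be treated as one flat 1024x512 array of 16-bit pixels:

```c
#include <stdint.h>

#define VRAM_WIDTH  1024                           /* pixels per line  */
#define VRAM_HEIGHT 512                            /* number of lines  */
#define VRAM_BYTES  (VRAM_WIDTH * VRAM_HEIGHT * 2) /* 1 MiB in total   */

static uint16_t vram[VRAM_WIDTH * VRAM_HEIGHT];

/* Every line is 1024 pixels wide, no matter what resolution the game uses. */
static inline uint16_t *vram_pixel(unsigned x, unsigned y)
{
    return &vram[y * VRAM_WIDTH + x];
}
```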

The VRAM structure is one of the easiest things to understand in the whole system. So easy that we can just show it in a picture:

The picture shows the whole VRAM content.

There is a framebuffer containing the image to be displayed on screen; I marked it with a red rectangle. It covers most of the VRAM, as we are looking at interlaced rendering here with a screen size of 640x480 pixels.

Additionally, there is space for textures and color palettes, which can also be clearly seen.

The situation is very special for interlaced rendering, because there is only one framebuffer. As we will see, this is quite problematic, because the framebuffer is redrawn at the same time as it is displayed on screen.

In comparison, there are games with 2 framebuffers in 240p mode:

These games can draw to one framebuffer and display the other one at the same time. This makes rendering much easier, as there is no risk of rendering areas that are currently displayed on screen.

This still has a problem: once the current frame has been fully rendered, the game cannot start rendering the next one, because the other framebuffer is still being displayed.

The developers have to choose whether to wait for vsync before swapping buffers or to switch over instantly. Instant switching leads to screen tearing, as the content drawn to the screen changes from the old framebuffer to the new one mid-frame. Waiting for vsync, on the other hand, does not allow arbitrary framerates: in NTSC you can then only render at 60, 30, 20 or 15 fps, but not at 25.
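To illustrate the vsync-locked case (just a sketch of the arithmetic, not game code): a new frame can only be presented every whole number of vsyncs, so the reachable framerates are 60 divided by that number.

```c
#include <stdio.h>

/* Framerates reachable when every buffer swap waits for vsync (NTSC, 60 Hz). */
int main(void)
{
    const double refresh = 60.0;
    for (int vsyncs_per_frame = 1; vsyncs_per_frame <= 4; vsyncs_per_frame++)
        printf("swap every %d vsync(s): %.1f fps\n",
               vsyncs_per_frame, refresh / vsyncs_per_frame);
    /* Prints 60, 30, 20 and 15 fps -- 25 fps never appears. */
    return 0;
}
```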

To solve these problems, 3 framebuffers can be used:

The difference is very obvious: each of the framebuffers is much smaller. 

While Re-Volt runs at 640x240, Jumping Flash only runs at 256x240. You can see that a third framebuffer would not fit into VRAM for Re-Volt, so the game would have to sacrifice resolution and therefore image quality.
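A rough way to see why (a simplified sketch that just tiles framebuffers on a grid and ignores the texture and palette space that real games also need):

```c
#include <stdio.h>

/* How many WxH framebuffers fit into the 1024x512 VRAM if we simply
 * tile them in a grid?  Real games place them by hand, but the limit
 * is the same idea. */
static int buffers_that_fit(int w, int h)
{
    return (1024 / w) * (512 / h);
}

int main(void)
{
    printf("Re-Volt       640x240: %d buffers\n", buffers_that_fit(640, 240)); /* 2 */
    printf("Jumping Flash 256x240: %d buffers\n", buffers_that_fit(256, 240)); /* 8 */
    return 0;
}
```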


This leads us back to where we started: interlaced rendering.

The BIOS screen has a resolution of 640x480, exactly the same size as both of Re-Volt's framebuffers combined. It is also clear here that the VRAM could not fit another framebuffer.

Instead, a trick is used to divide this single framebuffer into two:

You can see it in this image. It looks like scanlines, but it isn't: the BIOS has just cleared the "second framebuffer".

Interlaced mode will still draw only 240 lines (NTSC) to the screen every frame, not 480. In the first frame presented on screen you see all the odd lines, and in the next frame all the even lines. So while the odd lines are being displayed, we are free to render the even lines in the background, and vice versa. This has some side effects and requirements.
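Conceptually (a sketch with made-up names; the real logic lives in the GPU), the renderer only touches the lines of the field that is not currently being scanned out:

```c
#include <stdbool.h>
#include <stdint.h>

/* displayed_field is 0 while the even lines are on screen and 1 while
 * the odd lines are on screen.  Only the other field may be written. */
static bool may_draw_line(unsigned y, unsigned displayed_field)
{
    return (y & 1u) != displayed_field;
}

/* Illustrative fill that only touches the hidden field of the shared
 * 640x480 framebuffer.  VRAM lines are always 1024 pixels apart. */
static void fill_hidden_field(uint16_t *vram, unsigned displayed_field,
                              uint16_t color)
{
    for (unsigned y = 0; y < 480; y++) {
        if (!may_draw_line(y, displayed_field))
            continue;
        for (unsigned x = 0; x < 640; x++)
            vram[y * 1024 + x] = color;
    }
}
```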

The most important requirement is that rendering must be finished before vsync; otherwise the half-rendered image would be visible to the user. It could look like this:

With that in mind, we understand that each frame must be drawn within the time it takes to display one frame. Or in other words: games always run at the same frames per second as the refresh rate. 480i games on NTSC always run at 60 fps, because the "buffer swap" from odd to even lines happens automatically. If the rendering speed dipped below 60 fps, the game would either have to do some very special tricks or display half-rendered frames.

Another side effect is that the game cannot use framebuffer effects where parts of the framebuffer are reused as a texture, because the image is sliced into fields.

Finally, we have the big issue of recombining consecutive frames, which render different lines, into one image. On a CRT this happens through the beam and the phosphor afterglow, but on a modern display we would have to deinterlace the image to get rid of combining artifacts.

When using the most common "Weave" mode, two images are simply combined by interleaving the odd and even lines:

You can clearly see the horizontal lines next to moving objects in this image. This is because the content changes between the odd and even lines.

These combining artifacts are nothing but screen tearing. Instead of one tearing line with the old image above and the new image below, we get 240 tearing lines over the whole image.
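A weave deinterlacer really is this simple (a minimal sketch; the field buffers and names are my own assumptions):

```c
#include <stdint.h>

/* Weave: take the even output lines from one field and the odd output
 * lines from the other.  Each field buffer holds height/2 lines.  If
 * anything moved between the two fields, neighbouring lines no longer
 * match -- which is exactly the tearing on every line described above. */
static void weave(const uint16_t *even_field, const uint16_t *odd_field,
                  uint16_t *out, unsigned width, unsigned height)
{
    for (unsigned y = 0; y < height; y++) {
        const uint16_t *src = (y & 1u) ? odd_field : even_field;
        for (unsigned x = 0; x < width; x++)
            out[y * width + x] = src[(y / 2) * width + x];
    }
}
```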

There are several methods to mitigate this problem, and the quality depends a lot on the method and its implementation. It could look like this:

You can see that the image gets more blurry and has some ghosting to it.
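One simple mitigation (an illustrative sketch, not necessarily what any particular scaler does) is to blend the two fields instead of weaving them, which trades the combing for exactly this blur and ghosting:

```c
#include <stdint.h>

/* Blend deinterlacing on an 8-bit plane for simplicity (real 16-bit
 * pixels would be mixed per color channel): every output line averages
 * the two fields, so moving edges stop tearing but become soft and
 * leave ghosts, and vertical detail is halved. */
static void blend(const uint8_t *even_field, const uint8_t *odd_field,
                  uint8_t *out, unsigned width, unsigned height)
{
    for (unsigned y = 0; y < height; y++) {
        const uint8_t *a = &even_field[(y / 2) * width];
        const uint8_t *b = &odd_field [(y / 2) * width];
        for (unsigned x = 0; x < width; x++)
            out[y * width + x] = (uint8_t)((a[x] + b[x]) / 2);
    }
}
```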

But what if the image were rendered with a full 480 lines every frame?

The result looks much better, and out of the three, this would most likely be the preferred way to view the content. So let's think about what would be required to get there.

(There are still some small combining artifacts in the shadows, which are the result of a clever rendering trick: the game knows it only needs to render 240 lines each frame and uses that to save GPU processing power.)


The hack:

We already know that we cannot render to the framebuffer that is currently being displayed, and there is also no space in VRAM for another framebuffer. But why can't we just make the VRAM bigger so we have the space?

This would not work on a real console, but in the PSX core we are free to do so. In fact, the same thing has already been done for 24-bit rendering, so the infrastructure is there.

Also, the game must render both the odd and even lines every frame. This is much easier than it might sound, because there is special logic in the primitive renderers (e.g. for polygons) that prevents rendering the odd or even lines in interlaced mode. It can simply be disabled, and all lines will be drawn.
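In C-like terms (the core is actually written in HDL, and all names here are illustrative), the change boils down to one extra override in the per-line skip check:

```c
#include <stdbool.h>

/* Per-line check inside a primitive renderer (polygon, rectangle, ...).
 * Normally interlaced mode skips the lines of the field that is being
 * displayed; a hypothetical render_480p flag disables the skip so that
 * every line gets drawn each frame. */
static bool skip_line(unsigned y, bool interlaced, unsigned displayed_field,
                      bool render_480p)
{
    if (!interlaced || render_480p)
        return false;                    /* progressive or hack: draw everything */
    return (y & 1u) == displayed_field;  /* interlaced: skip the displayed field */
}
```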

The third component is displaying the higher resolution image. The PSX video timing can only transfer a maximum of about 300 visible lines in PAL and about 256 in NTSC. There is simply no more time available.

Thankfully, MiSTer's HDMI scaler has a special direct framebuffer mode. It is already used for the GBA core's high-res rendering, and we will use it here as well.

The direct framebuffer mode lets the core define an area in DDR3 memory (where we store the VRAM anyway), and the scaler will display it even if the video signal delivers a completely different resolution. This allows the core to output 480p content via HDMI while keeping the 480i signal for analog out.
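Conceptually, the core only has to tell the scaler where the finished 480p image lives. The structure and values below are purely illustrative (they are not the actual MiSTer framebuffer interface); the interesting part is that the line stride can simply stay at the VRAM line width:

```c
#include <stdint.h>

/* Illustrative description of a direct-framebuffer region in DDR3. */
struct scaler_fb {
    uint32_t base;           /* DDR3 byte address of the first pixel */
    uint32_t width;          /* visible pixels per line              */
    uint32_t height;         /* visible lines                        */
    uint32_t stride;         /* bytes from one line to the next      */
    uint32_t bits_per_pixel;
};

/* A 640x480 image with 16-bit pixels, living somewhere inside the DDR3
 * area that also holds the emulated (and now enlarged) VRAM. */
static const struct scaler_fb fb_480p = {
    .base           = 0,          /* placeholder: wherever the image starts */
    .width          = 640,
    .height         = 480,
    .stride         = 1024 * 2,   /* lines are 1024 pixels apart in VRAM    */
    .bits_per_pixel = 16,
};
```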


With all that in place, it seems like we have a stable conversion from 480i to 480p that works everywhere... not so fast!

The new feature isn't called a hack without reason. Unfortunately, these tricks don't work universally for every game and situation. Because games normally render to a "single framebuffer", they might depend on it behaving that way.

One example:

If you pause in Ehrgeiz, the game only updates the small area of the pause notification, marked in red. The rest of the screen is no longer updated.

This leads to severe problems with our multi-framebuffer approach, because the game is no longer rendering full frames and instead depends on content that is already in the framebuffer. But our additional framebuffer doesn't have this content, or has different content, leading to all kinds of artifacts.


So is the idea dead? Not yet. We can make the 480p mode adaptive.

Let's assume we can detect and distinguish between the situation where the whole screen is rendered and the one where a static image is shown. Then we could switch between 480p for full-screen rendering and 480i for mostly static screens, which don't show heavy combining artifacts anyway.

The quality of this solution stands and falls with the detection of this "live full-screen content". The core currently assumes that switching into this mode is only worthwhile if 3D content (with polygons) is rendered and it covers a large part of the screen (about 70% of the pixels or more).
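As a rough sketch of such a heuristic (illustrative only; apart from the roughly-70% figure mentioned above, the names and exact thresholds are assumptions):

```c
#include <stdbool.h>
#include <stdint.h>

/* Decide per frame whether 480p output is worthwhile.  The real core
 * works on the drawn primitives; this sketch just mirrors the two
 * conditions described above: polygons were rendered, and they covered
 * roughly 70% of the screen or more. */
static bool use_480p(uint32_t polygon_pixels_drawn, uint32_t screen_pixels)
{
    const bool has_3d_content = polygon_pixels_drawn > 0;
    const bool covers_screen  = polygon_pixels_drawn * 10 >= screen_pixels * 7;
    return has_3d_content && covers_screen;
}
```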

This detection may leave out some games that could use the mode, and it may fail to filter out some games that won't work in this mode. That's why it's called a hack: you cannot use it universally for every game.

Another problem can occur when the additionally rendered lines make the GPU performance drop below 60 fps. This happens in Tobal, so the hack cannot be used there.

Still, if you play a game where it works fine, the difference is huge.

For example, Tekken 3, Ehrgeiz, Dead or Alive, Internal Section and even Bubsy 3D work fine with this mode. The hack already covers many of the typical use cases, as there are not many games with fast 480i content anyway. So if you play a 480i game, give it a try.

Will this solve all interlaced issues? No. A motion adaptive deinterlacing solution would certainly be great to have, but probably will not fit into the FPGA together with the PSX core.

Considering that this hack costs almost no resources at all and looks even better than motion adaptive deinterlacing when a game works with it, it's probably not a bad thing to have.

Have fun!

Comments

AdrienAG

Interesting. That was my guess about the enhancements being removed if they wouldn't fit. Thank you for the explanation. I was thinking of a few unported gun games and fighting games that did not make the jump to the PSX at the end of its life. But you're right, most of its small catalog has been ported to the PSX. 😊 Thanks!

Spiff

Would it be possible to do half-res MSAA for 15 kHz by downscaling 480p to 240p/480i?

FPGAzumSpass

While it might work in principle, the text on screen is often not made for it. For example, in Ehrgeiz the text becomes unreadable if you reduce the vertical resolution.