I am trying to use OpenRL to bake lightmaps.

When using OpenGL I used to bake lighting into lightmaps by setting the lightmap texture as the render target and using the lightmap UV coordinates as the vertex position in the shader: gl_Position is set to the UV coordinates, while the real vertex position is passed through a varying so the fragment shader can compute the diffuse lighting.
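For context, this is roughly the shader pair I mean. It's only a minimal GLSL sketch of that approach, and the attribute/uniform names (a_lightmapUV, u_lightPos and so on) are placeholders I made up:

```glsl
// --- Vertex shader: rasterize into lightmap UV space instead of screen space ---
attribute vec3 a_position;    // world-space position (or transform by the model matrix first)
attribute vec3 a_normal;
attribute vec2 a_lightmapUV;  // unique, non-overlapping lightmap UV

varying vec3 v_worldPos;
varying vec3 v_worldNormal;

void main()
{
    // Map the [0,1] lightmap UV to [-1,1] NDC so each fragment lands on its lightmap texel.
    gl_Position   = vec4(a_lightmapUV * 2.0 - 1.0, 0.0, 1.0);
    v_worldPos    = a_position;   // pass the real position along for the lighting math
    v_worldNormal = a_normal;
}

// --- Fragment shader: simple diffuse term written into the lightmap texel ---
varying vec3 v_worldPos;
varying vec3 v_worldNormal;

uniform vec3 u_lightPos;
uniform vec3 u_lightColor;

void main()
{
    vec3 N = normalize(v_worldNormal);
    vec3 L = normalize(u_lightPos - v_worldPos);
    gl_FragColor = vec4(u_lightColor * max(dot(N, L), 0.0), 1.0);
}
```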

When I use OpenRL I cannot modify rl_PixelCoord in a ray shader, so I cannot render into a flat UV space to generate lightmaps. And rl_Position has to be set to the real position for the lighting to be computed correctly.

How can I modify rl_PixelCoord in a ray shader so that the output pixel lands in flat UV space and I can generate lightmaps?

Thanks in advance.

I think I have an idea. At the moment when a ray shader calls accumulate() to accumulate the lighting result, also emit a new ray from the hit position towards a position computed from the lightmap UV coordinate, and then call accumulate() again to write the lighting result to the intended UV coordinate in the framebuffer.
I will try it.

It seems this won't work. The target pixel a ray accumulates into was already fixed when the ray was first emitted.

It seems that I have got it. When unwrapping the UV coordinates of the model to be rendered, create a texture the same size as the lightmap and write each point's world position into its corresponding UV texel. Then run the ray tracing: render the model to be baked using the generated texture, and for each pixel emit a ray from a position offset along the light direction back towards that pixel's world position, to test whether the pixel can see the light, i.e. whether it is occluded or not. With that I can at least bake a shadow map. I'll keep working on it.
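Something like the following frame shader is what I have in mind. I'm writing the RLSL built-ins (setup(), rl_OutputRayCount, createRay()/emitRay(), rl_OutRay, rl_FrameCoord, rl_FrameSize) from memory of the OpenRL samples, and the uniform names and offset distance are placeholders, so please treat it only as a sketch of the idea:

```glsl
// OpenRL frame shader sketch: one shadow-test ray per lightmap texel.
uniform sampler2D u_worldPosTex;  // world positions written at lightmap resolution
uniform vec3      u_lightDir;     // normalized direction towards the light
uniform float     u_offsetDist;   // how far back towards the light the ray starts

void setup()
{
    rl_OutputRayCount = 1;        // one ray per framebuffer pixel
}

void main()
{
    vec2 uv       = rl_FrameCoord.xy / rl_FrameSize.xy;
    vec3 worldPos = texture2D(u_worldPosTex, uv).xyz;

    createRay();
    // Start back towards the light and shoot at the stored surface point.
    // If the baked surface is the first thing the ray hits, its ray shader can
    // accumulate() "lit"; if an occluder is hit first, this texel stays dark.
    rl_OutRay.origin    = worldPos + u_lightDir * u_offsetDist;
    rl_OutRay.direction = -u_lightDir;
    emitRay();
}
```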

I think I have almost got it.
Bake per object, one by one. For a given instance, render two render targets with OpenGL, using its lightmap UV instead of the NDC position, to output a world position texture and a world normal texture (see the GLSL sketch below).
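The OpenGL side is the same UV-as-position trick as before, only writing geometry data instead of lighting. Again a minimal GLSL sketch with placeholder names; the FBO needs two (ideally floating-point) color attachments so the positions are not clamped:

```glsl
// --- Vertex shader: same lightmap-UV-as-NDC trick as earlier ---
attribute vec3 a_worldPos;
attribute vec3 a_worldNormal;
attribute vec2 a_lightmapUV;

varying vec3 v_worldPos;
varying vec3 v_worldNormal;

void main()
{
    gl_Position   = vec4(a_lightmapUV * 2.0 - 1.0, 0.0, 1.0);
    v_worldPos    = a_worldPos;
    v_worldNormal = a_worldNormal;
}

// --- Fragment shader: write world position and normal to the two render targets ---
varying vec3 v_worldPos;
varying vec3 v_worldNormal;

void main()
{
    gl_FragData[0] = vec4(v_worldPos, 1.0);                // world position texture
    gl_FragData[1] = vec4(normalize(v_worldNormal), 0.0);  // world normal texture
}
```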
Then use these two textures as input to the OpenRL baking frame shader, and emit a ray for each texel from (world position + normal) in the -normal direction. That gives us a light map.
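A rough sketch of that baking frame shader, with the same caveat as above that the RLSL built-in names are from memory and the uniform names are placeholders:

```glsl
// OpenRL baking frame shader sketch: one gather ray per lightmap texel.
uniform sampler2D u_worldPosTex;    // from the OpenGL pass above
uniform sampler2D u_worldNormalTex;

void setup()
{
    rl_OutputRayCount = 1;
}

void main()
{
    vec2 uv     = rl_FrameCoord.xy / rl_FrameSize.xy;
    vec3 pos    = texture2D(u_worldPosTex, uv).xyz;
    vec3 normal = normalize(texture2D(u_worldNormalTex, uv).xyz);

    createRay();
    // Start a little above the surface along its normal and shoot back down at it,
    // so the surface's ray shader runs for this texel and whatever it accumulate()s
    // ends up at exactly this point's lightmap UV.
    rl_OutRay.origin    = pos + normal * 0.01;   // small arbitrary offset to avoid self-intersection
    rl_OutRay.direction = -normal;
    emitRay();
}
```

The nice part is that the texel a primary ray accumulates into is fixed by the frame coordinate it was emitted from, so the lighting computed at the hit point lands at the right lightmap UV automatically.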
You can learn how to map an OpenGL FBO color attachment to an OpenRL color input from the OpenRL-Hybrid-Example.
Next I will do further work, e.g. allocate a unique UV region for every model and pack them together into one atlas, and emit more rays per texel to get a more photorealistic light map. I hope this comment helps you.