Converting pixel coordinates/size to OpenGL's coordinate system?

  2d, c++, coordinates, opengl, opengl-compat

Is there a formula to draw, for example, a 100 × 200 rectangle at (10, 10) (regular 2D orthographic coordinates) using OpenGL?

I have found no real definitive answer to this question on the internet, apart from this post:

    float pixelToScreenX(int screen_width, int width, int x) {
        return (2.0 * tan(0.5 * width) * -1) * (width / screen_width) * -1;
    }

    float pixelToScreenY(int screen_height, int height, int y) {
        return (2.0 * tan(0.5 * height) * -1) * (height / screen_height) * -1;
    }

It does not work; every result comes out as 0. I don't understand the math, so I can't debug it myself.

I have my game loop, RGB colors converted to OpenGL's range, and so on, but this coordinate system threw me off. Also, glOrtho does not work no matter what I do or where I place it; same for glViewport. That's why I would rather find a formula to convert my rectangle data (x, y, width, height) to OpenGL's weird system.
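For what it's worth, glOrtho multiplies into whatever matrix is current, so the usual reason it appears to do nothing is that it was applied to the wrong matrix, or on top of a stale one. A minimal sketch of the typical setup (assumed names: screen_width and screen_height stand for the window size in pixels) that makes glVertex2f accept pixel coordinates directly, with (0, 0) at the top-left:

```cpp
// Assumed one-time setup, after the GL context exists and before drawing.
// screen_width / screen_height: window size in pixels (placeholders).
glViewport(0, 0, screen_width, screen_height);

glMatrixMode(GL_PROJECTION);   // glOrtho must target the projection matrix
glLoadIdentity();              // wipe any previous projection first
// left = 0, right = width, bottom = height, top = 0: passing the height
// as "bottom" flips the y axis so y grows downward, matching the usual
// 2D pixel convention.
glOrtho(0.0, screen_width, screen_height, 0.0, -1.0, 1.0);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
```

With a projection like this there is nothing left to convert: glVertex2f(10, 10) is the pixel (10, 10).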

I also found this, but that level of math is far beyond my current level.
I'm used to isometric formulas and the like, but matrices are Greek to me.

My current drawing code (rect is just a class containing the rectangle coordinates and size):


    glVertex2f(0, 0);
    glVertex2f(-100, 0);
    glVertex2f(-100, 100);
    glVertex2f(0, 100);



But what I wish to do is:


    glVertex2f(rect.x, rect.y);
    glVertex2f(rect.x1, rect.y);
    glVertex2f(rect.x1, rect.y1);
    glVertex2f(rect.x, rect.y1);



Which works with OpenGL's weird system.

(Don't pay attention to glColor; it's a function I made that uses my custom Color class, which converts hexadecimal color codes to OpenGL colors. Each of my custom rectangles has one.)

Where rect.x, rect.x1, etc. are 2D orthographic (x, y) coordinates converted to OpenGL's gibberish when I create my rectangle, like this:

    color = Color(new_color);

    x = pixelToScreenX(screen_width, width, new_x);
    y = pixelToScreenY(screen_height, height, new_y);

    x1 = pixelToScreenX(screen_width, width, new_x + width);
    y1 = pixelToScreenY(screen_height, height, new_y + height);

Basically, I was wondering how I can write pixelToScreenX and pixelToScreenY.

By the way, I do this before any drawing:


Edit, to the silent downvoters:

This is such toxic behavior… I have spent days on this issue. I don't know what else to do. I have tried so many things, but I cannot fix it.

Source: Windows Questions C++