### How to create your own simple 3D render engine in pure Java

3D render engines used in games and multimedia production nowadays are breathtaking in the complexity of the mathematics and programming involved, and the results they produce are correspondingly stunning.

Many developers assume that building even the simplest 3D application from scratch requires inhuman knowledge and effort, but thankfully that isn't always the case. Here I'd like to share how you can build your very own 3D render engine, fully capable of producing nice-looking 3D images.

Why would you want to build a 3D engine? At the very least, it will really help you understand how real modern engines do their black magic. It is also sometimes useful to add 3D rendering capabilities to an application without pulling in huge external dependencies. In the case of Java, that means you can build a 3D viewer app with zero dependencies (apart from the Java APIs) that will run almost anywhere - and fit into 50 KB!

Of course, if you want to build big 3D applications with fluid graphics, you'll be much better off using OpenGL/WebGL. Still, once you have a basic understanding of 3D engine internals, more complex engines will seem much more approachable.

In this post, I will cover basic 3D rendering with orthographic projection, simple triangle rasterization, z-buffering and flat shading. I will not focus on heavy performance optimizations or more complex topics like textures or different lighting setups - if you need those, consider better suited tools like OpenGL (there are lots of libraries that let you work with OpenGL even from Java).

Code examples will be in Java, but the ideas explained here can be applied to any language of your choice. For your convenience, I will be following along with small interactive JavaScript demos right here in the post.

Enough talk - let's begin!

#### GUI wrapper

First of all, we want to put at least something on the screen. For that I will use a very simple application: our rendered image plus two sliders to adjust the rotation.

import javax.swing.*;
import java.awt.*;

public class DemoViewer {

    public static void main(String[] args) {
        JFrame frame = new JFrame();
        Container pane = frame.getContentPane();
        pane.setLayout(new BorderLayout());

        // slider to control horizontal rotation
        JSlider headingSlider = new JSlider(0, 360, 180);
        pane.add(headingSlider, BorderLayout.SOUTH);

        // slider to control vertical rotation
        JSlider pitchSlider = new JSlider(SwingConstants.VERTICAL, -90, 90, 0);
        pane.add(pitchSlider, BorderLayout.EAST);

        // panel to display render results
        JPanel renderPanel = new JPanel() {
            public void paintComponent(Graphics g) {
                Graphics2D g2 = (Graphics2D) g;
                g2.setColor(Color.BLACK);
                g2.fillRect(0, 0, getWidth(), getHeight());

                // rendering magic will happen here
            }
        };
        pane.add(renderPanel, BorderLayout.CENTER);

        frame.setSize(400, 400);
        frame.setVisible(true);
    }
}


The resulting window should resemble this:

Now let's add some essential model classes - vertices and triangles. A vertex is simply a structure to store our three coordinates (X, Y and Z), and a triangle binds together three vertices and stores its color.

class Vertex {
    double x;
    double y;
    double z;
    Vertex(double x, double y, double z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }
}

class Triangle {
    Vertex v1;
    Vertex v2;
    Vertex v3;
    Color color;
    Triangle(Vertex v1, Vertex v2, Vertex v3, Color color) {
        this.v1 = v1;
        this.v2 = v2;
        this.v3 = v3;
        this.color = color;
    }
}


For this post, I'll assume that the X coordinate means movement in the left-right direction, Y means movement up-down on screen, and Z will be depth (so the Z axis is perpendicular to your screen). Positive Z will mean "towards the observer".

As our example object, I selected a tetrahedron, as it's the easiest 3D shape I could think of - only 4 triangles are needed to describe it. Here's the visualization:

The code is very simple - we just create 4 triangles and add them to a list:

List<Triangle> tris = new ArrayList<>();
tris.add(new Triangle(new Vertex(100, 100, 100),
                      new Vertex(-100, -100, 100),
                      new Vertex(-100, 100, -100),
                      Color.WHITE));
tris.add(new Triangle(new Vertex(100, 100, 100),
                      new Vertex(-100, -100, 100),
                      new Vertex(100, -100, -100),
                      Color.RED));
tris.add(new Triangle(new Vertex(-100, 100, -100),
                      new Vertex(100, -100, -100),
                      new Vertex(100, 100, 100),
                      Color.GREEN));
tris.add(new Triangle(new Vertex(-100, 100, -100),
                      new Vertex(100, -100, -100),
                      new Vertex(-100, -100, 100),
                      Color.BLUE));


The resulting shape is centered at the origin (0, 0, 0), which is quite convenient, since we will later be rotating it around that point.

Now let's put that on screen. For now, we'll ignore rotation and just show the wireframe. Since we are using orthographic projection, that's quite simple: discard the Z coordinate and draw the resulting triangles.

g2.translate(getWidth() / 2, getHeight() / 2);
g2.setColor(Color.WHITE);
for (Triangle t : tris) {
    Path2D path = new Path2D.Double();
    path.moveTo(t.v1.x, t.v1.y);
    path.lineTo(t.v2.x, t.v2.y);
    path.lineTo(t.v3.x, t.v3.y);
    path.closePath();
    g2.draw(path);
}


Note how I applied a translation before drawing the triangles. That is done to put the origin (0, 0, 0) at the center of our drawing area - initially, the 2D origin is located in the top left corner of the screen. The result should look like this:

You may not believe it yet, but that's our tetrahedron in orthographic projection, I promise!

Now we need to add rotation. To do that, I'll need to digress a little and talk about using matrices to achieve transformations on 3D points.

There are many possible ways to manipulate 3D points, but the most flexible is matrix multiplication. The idea is to represent your points as row vectors of 3 elements; a transformation is then simply a multiplication by a 3x3 matrix.

You take your input vector A:

$$A = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix}$$

and multiply it with transformation matrix T to get output vector B:

$$AT = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \begin{bmatrix} t_{xx} & t_{xy} & t_{xz} \\ t_{yx} & t_{yy} & t_{yz} \\ t_{zx} & t_{zy} & t_{zz} \end{bmatrix} = \begin{bmatrix} a_x t_{xx} + a_y t_{yx} + a_z t_{zx} & a_x t_{xy} + a_y t_{yy} + a_z t_{zy} & a_x t_{xz} + a_y t_{yz} + a_z t_{zz} \end{bmatrix} = \begin{bmatrix} b_x & b_y & b_z \end{bmatrix}$$

For example, here's how you would scale a point by 2:

$$\begin{bmatrix} 1 & 2 & 3 \end{bmatrix} \begin{bmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{bmatrix} = \begin{bmatrix} 1 \times 2 & 2 \times 2 & 3 \times 2 \end{bmatrix} = \begin{bmatrix} 2 & 4 & 6 \end{bmatrix}$$

You can't describe all possible transformations using 3x3 matrices - for example, translation is off-limits. You can achieve it with 4x4 matrices, effectively doing skew in 4D space, but that is beyond the scope of this tutorial.

The most useful transformations for this tutorial are scaling and rotation.

Any rotation in 3D space can be expressed as a combination of 3 primitive rotations: rotation in the XY plane, rotation in the YZ plane and rotation in the XZ plane. We can write out transformation matrices for each of those rotations as follows:

XY rotation matrix:

$$\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

YZ rotation matrix:

$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{bmatrix}$$

XZ rotation matrix:

$$\begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix}$$

Here comes the magic: if you need to first rotate a point in the XY plane using transformation matrix $T_1$, and then rotate it in the YZ plane using transformation matrix $T_2$, you can simply multiply $T_1$ with $T_2$ and get a single matrix to describe the whole rotation:

$$(AT_1)T_2 = A(T_1T_2)$$

This is a very useful optimization - instead of recomputing multiple rotations on each point, you precompute the matrix once and then use it in your pipeline.

Enough of the scary math stuff, let's get back to code. We will create a utility class Matrix3 that handles matrix-matrix and vector-matrix multiplication:

class Matrix3 {
    double[] values;
    Matrix3(double[] values) {
        this.values = values;
    }
    Matrix3 multiply(Matrix3 other) {
        double[] result = new double[9];
        for (int row = 0; row < 3; row++) {
            for (int col = 0; col < 3; col++) {
                for (int i = 0; i < 3; i++) {
                    result[row * 3 + col] +=
                        this.values[row * 3 + i] * other.values[i * 3 + col];
                }
            }
        }
        return new Matrix3(result);
    }
    Vertex transform(Vertex in) {
        return new Vertex(
            in.x * values[0] + in.y * values[3] + in.z * values[6],
            in.x * values[1] + in.y * values[4] + in.z * values[7],
            in.x * values[2] + in.y * values[5] + in.z * values[8]
        );
    }
}
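To sanity-check the composition identity $(AT_1)T_2 = A(T_1T_2)$ from earlier, here is a small standalone sketch. It re-declares minimal `mul`/`apply` helpers (equivalent to Matrix3's methods) so the snippet compiles on its own; the class and method names are mine, not from the article.

```java
public class ComposeCheck {
    // multiply two row-major 3x3 matrices
    static double[] mul(double[] a, double[] b) {
        double[] r = new double[9];
        for (int row = 0; row < 3; row++)
            for (int col = 0; col < 3; col++)
                for (int i = 0; i < 3; i++)
                    r[row * 3 + col] += a[row * 3 + i] * b[i * 3 + col];
        return r;
    }

    // apply a matrix to a row vector: v * m
    static double[] apply(double[] v, double[] m) {
        return new double[] {
            v[0] * m[0] + v[1] * m[3] + v[2] * m[6],
            v[0] * m[1] + v[1] * m[4] + v[2] * m[7],
            v[0] * m[2] + v[1] * m[5] + v[2] * m[8]
        };
    }

    public static void main(String[] args) {
        double a = Math.toRadians(30), b = Math.toRadians(45);
        // XY rotation followed by YZ rotation (matrices from the article)
        double[] t1 = { Math.cos(a), -Math.sin(a), 0, Math.sin(a), Math.cos(a), 0, 0, 0, 1 };
        double[] t2 = { 1, 0, 0, 0, Math.cos(b), Math.sin(b), 0, -Math.sin(b), Math.cos(b) };
        double[] v = { 1, 2, 3 };

        double[] stepwise = apply(apply(v, t1), t2); // (A T1) T2
        double[] combined = apply(v, mul(t1, t2));   // A (T1 T2)

        // both paths give the same point, up to floating-point noise
        for (int i = 0; i < 3; i++) {
            System.out.printf("%.6f %.6f%n", stepwise[i], combined[i]);
        }
    }
}
```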


Now we can bring our rotation sliders to life. The horizontal slider will control "heading" - in our case, rotation in the XZ plane (left-right) - and the vertical slider will control "pitch" - rotation in the YZ plane (up-down).

Let's create our rotation matrix and add it into our pipeline:

double heading = Math.toRadians(headingSlider.getValue());
Matrix3 transform = new Matrix3(new double[] {
        Math.cos(heading), 0, -Math.sin(heading),
        0, 1, 0,
        Math.sin(heading), 0, Math.cos(heading)
    });

g2.translate(getWidth() / 2, getHeight() / 2);
g2.setColor(Color.WHITE);
for (Triangle t : tris) {
    Vertex v1 = transform.transform(t.v1);
    Vertex v2 = transform.transform(t.v2);
    Vertex v3 = transform.transform(t.v3);
    Path2D path = new Path2D.Double();
    path.moveTo(v1.x, v1.y);
    path.lineTo(v2.x, v2.y);
    path.lineTo(v3.x, v3.y);
    path.closePath();
    g2.draw(path);
}


You'll also need to add listeners on the heading and pitch sliders to force a redraw when you drag the handles:

headingSlider.addChangeListener(e -> renderPanel.repaint());
pitchSlider.addChangeListener(e -> renderPanel.repaint());


Here's what you should get working (this example is interactive - try dragging the handles!):

As you may have noticed, up-down rotation doesn't work yet. Let's add the next transform:

Matrix3 headingTransform = new Matrix3(new double[] {
        Math.cos(heading), 0, -Math.sin(heading),
        0, 1, 0,
        Math.sin(heading), 0, Math.cos(heading)
    });
double pitch = Math.toRadians(pitchSlider.getValue());
Matrix3 pitchTransform = new Matrix3(new double[] {
        1, 0, 0,
        0, Math.cos(pitch), Math.sin(pitch),
        0, -Math.sin(pitch), Math.cos(pitch)
    });
Matrix3 transform = headingTransform.multiply(pitchTransform);


Observe that both rotations now work and combine together nicely:

Up to this point, we were only drawing the wireframe of our shape. Now we need to start filling those triangles with some substance. To do this, we first need to "rasterize" the triangle - convert it to a list of the pixels on screen that it occupies.

I'll use a relatively simple but inefficient method - rasterization via barycentric coordinates. Real 3D engines use hardware rasterization, which is very fast and efficient, but we can't use the graphics card here, so we will be doing it manually in our code.

The idea is to compute barycentric coordinates for each pixel that could possibly lie inside the triangle, and discard those that fall outside. The following snippet implements the algorithm. Note how we start accessing image pixels directly.

BufferedImage img =
    new BufferedImage(getWidth(), getHeight(), BufferedImage.TYPE_INT_ARGB);

for (Triangle t : tris) {
    Vertex v1 = transform.transform(t.v1);
    Vertex v2 = transform.transform(t.v2);
    Vertex v3 = transform.transform(t.v3);

    // since we are not using Graphics2D anymore,
    // we have to do translation manually
    v1.x += getWidth() / 2;
    v1.y += getHeight() / 2;
    v2.x += getWidth() / 2;
    v2.y += getHeight() / 2;
    v3.x += getWidth() / 2;
    v3.y += getHeight() / 2;

    // compute rectangular bounds for triangle
    int minX = (int) Math.max(0, Math.ceil(Math.min(v1.x, Math.min(v2.x, v3.x))));
    int maxX = (int) Math.min(img.getWidth() - 1,
                              Math.floor(Math.max(v1.x, Math.max(v2.x, v3.x))));
    int minY = (int) Math.max(0, Math.ceil(Math.min(v1.y, Math.min(v2.y, v3.y))));
    int maxY = (int) Math.min(img.getHeight() - 1,
                              Math.floor(Math.max(v1.y, Math.max(v2.y, v3.y))));

    double triangleArea =
        (v1.y - v3.y) * (v2.x - v3.x) + (v2.y - v3.y) * (v3.x - v1.x);

    for (int y = minY; y <= maxY; y++) {
        for (int x = minX; x <= maxX; x++) {
            double b1 =
                ((y - v3.y) * (v2.x - v3.x) + (v2.y - v3.y) * (v3.x - x)) / triangleArea;
            double b2 =
                ((y - v1.y) * (v3.x - v1.x) + (v3.y - v1.y) * (v1.x - x)) / triangleArea;
            double b3 =
                ((y - v2.y) * (v1.x - v2.x) + (v1.y - v2.y) * (v2.x - x)) / triangleArea;
            if (b1 >= 0 && b1 <= 1 && b2 >= 0 && b2 <= 1 && b3 >= 0 && b3 <= 1) {
                img.setRGB(x, y, t.color.getRGB());
            }
        }
    }
}

g2.drawImage(img, 0, 0, null);


Quite a lot of code, but now we have a colored tetrahedron on our displays:

If you play around with the demo, you'll notice that not all is well - for example, the blue triangle is always on top of the others. It happens because we are currently painting the triangles one after another, and the blue triangle is last - thus it is painted over all the others.

To fix this, I will introduce the concept of a z-buffer (or depth buffer). The idea is to build an intermediate array during rasterization that stores the depth of the last seen element at each pixel. When rasterizing a triangle, we compare each pixel's depth against the value already stored, and only color the pixel if it is closer to the observer (remember, positive Z points towards the viewer, so greater depth wins).

double[] zBuffer = new double[img.getWidth() * img.getHeight()];
// initialize array with extremely far away depths
for (int q = 0; q < zBuffer.length; q++) {
    zBuffer[q] = Double.NEGATIVE_INFINITY;
}

for (Triangle t : tris) {
    // handle rasterization...
    // for each rasterized pixel:
    double depth = b1 * v1.z + b2 * v2.z + b3 * v3.z;
    int zIndex = y * img.getWidth() + x;
    if (zBuffer[zIndex] < depth) {
        img.setRGB(x, y, t.color.getRGB());
        zBuffer[zIndex] = depth;
    }
}


Now you can see that our tetrahedron actually has one white side:

We now have a functioning rendering pipeline!

But we are not finished yet. In real life, the perceived color of a surface varies with light source positions - if only a small amount of light falls on the surface, we perceive it as darker.

In computer graphics, we can achieve a similar effect by using so-called "shading" - altering the color of the surface based on its angle and distance to lights.

The simplest form of shading is flat shading. It takes into account only the angle between the surface normal and the direction of the light source: you find the cosine of the angle between those two vectors and multiply the color by it. This approach is very simple and cheap, so it is often used for high-speed rendering when more advanced shading techniques are too computationally expensive.

First, we need to compute the normal vector for our triangle. If we have triangle ABC, we can compute its normal vector by taking the cross product of vectors AB and AC and then dividing the resulting vector by its length.

Cross product is a binary operation on two vectors that is defined in 3d space as follows:

$$u \times v = \begin{bmatrix} u_x & u_y & u_z \end{bmatrix} \times \begin{bmatrix} v_x & v_y & v_z \end{bmatrix} = \begin{bmatrix} u_y v_z - u_z v_y & u_z v_x - u_x v_z & u_x v_y - u_y v_x \end{bmatrix}$$

Here's the visual explanation of what cross product does:

for (Triangle t : tris) {
    // transform vertices before calculating normal...

    Vertex ab = new Vertex(v2.x - v1.x, v2.y - v1.y, v2.z - v1.z);
    Vertex ac = new Vertex(v3.x - v1.x, v3.y - v1.y, v3.z - v1.z);
    Vertex norm = new Vertex(
        ab.y * ac.z - ab.z * ac.y,
        ab.z * ac.x - ab.x * ac.z,
        ab.x * ac.y - ab.y * ac.x
    );
    double normalLength =
        Math.sqrt(norm.x * norm.x + norm.y * norm.y + norm.z * norm.z);
    norm.x /= normalLength;
    norm.y /= normalLength;
    norm.z /= normalLength;
}


Now we need to calculate the cosine of the angle between the triangle normal and the light direction. For simplicity, we will assume that our light is positioned directly behind the camera at some infinite distance (such a configuration is called a "directional light") - so our light source direction will be $\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$.

Cosine of angle between vectors can be calculated using this formula:

$$\cos\theta = \frac{A \cdot B}{||A|| \, ||B||}$$

where $||A||$ is length of a vector, and $A \cdot B$ is dot product of vectors:

$$A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} b_x & b_y & b_z \end{bmatrix} = a_x b_x + a_y b_y + a_z b_z$$

Notice that the length of our light direction vector ($\begin{bmatrix} 0 & 0 & 1 \end{bmatrix}$) is 1, as is the length of the triangle normal (we have already normalized it). Thus the formula simply becomes:

$$\cos\theta = A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} b_x & b_y & b_z \end{bmatrix}$$

Also observe that only the Z component of the light direction vector is non-zero, so we can simplify further:

$$\cos\theta = A \cdot B = \begin{bmatrix} a_x & a_y & a_z \end{bmatrix} \cdot \begin{bmatrix} 0 & 0 & 1 \end{bmatrix} = a_z$$

The code is now trivial:

double angleCos = Math.abs(norm.z);


We drop the sign from the result because, for our simple purposes, we don't care which side of the triangle faces the camera. In a real application, you would need to keep track of that and apply shading accordingly.

Now that we have our shade coefficient, we can apply it to the triangle color. A naive version might look like this:

public static Color getShade(Color color, double shade) {
    int red = (int) (color.getRed() * shade);
    int green = (int) (color.getGreen() * shade);
    int blue = (int) (color.getBlue() * shade);
    return new Color(red, green, blue);
}


While this gives us some shading effect, the falloff is much quicker than we need. That happens because Java uses the sRGB color space, which is already scaled to match our non-linear color perception.

So we need to convert each color from the scaled format to linear, apply the shade, and then convert back. A true sRGB-to-linear conversion is quite involved, so I won't implement the full spec here - just a basic gamma approximation.

public static Color getShade(Color color, double shade) {
    double redLinear = Math.pow(color.getRed(), 2.4) * shade;
    double greenLinear = Math.pow(color.getGreen(), 2.4) * shade;
    double blueLinear = Math.pow(color.getBlue(), 2.4) * shade;

    int red = (int) Math.pow(redLinear, 1/2.4);
    int green = (int) Math.pow(greenLinear, 1/2.4);
    int blue = (int) Math.pow(blueLinear, 1/2.4);

    return new Color(red, green, blue);
}
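To see how much difference the linear-space shading makes, here is a small standalone comparison. The method bodies mirror the two getShade variants from the article; the class name and the naive method's name are mine.

```java
import java.awt.Color;

public class ShadeCheck {
    // naive version: scale sRGB channel values directly
    static Color getShadeNaive(Color color, double shade) {
        return new Color((int) (color.getRed() * shade),
                         (int) (color.getGreen() * shade),
                         (int) (color.getBlue() * shade));
    }

    // gamma-aware version: shade in (approximately) linear space
    static Color getShade(Color color, double shade) {
        int red = (int) Math.pow(Math.pow(color.getRed(), 2.4) * shade, 1 / 2.4);
        int green = (int) Math.pow(Math.pow(color.getGreen(), 2.4) * shade, 1 / 2.4);
        int blue = (int) Math.pow(Math.pow(color.getBlue(), 2.4) * shade, 1 / 2.4);
        return new Color(red, green, blue);
    }

    public static void main(String[] args) {
        // at 50% light, the gamma-aware result stays noticeably brighter
        System.out.println(getShadeNaive(Color.WHITE, 0.5).getRed()); // prints 127
        System.out.println(getShade(Color.WHITE, 0.5).getRed());      // prints 191
    }
}
```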


Observe how our tetrahedron comes to life:

Now we have a working 3D render engine, with colors, lighting and shading, and it took us about 200 lines of code - not bad!

Here's a bonus for you: we can quickly create a sphere approximation from this tetrahedron, by repeatedly subdividing each triangle into four smaller ones and "inflating" the result:

public static List<Triangle> inflate(List<Triangle> tris) {
    List<Triangle> result = new ArrayList<>();
    for (Triangle t : tris) {
        Vertex m1 =
            new Vertex((t.v1.x + t.v2.x)/2, (t.v1.y + t.v2.y)/2, (t.v1.z + t.v2.z)/2);
        Vertex m2 =
            new Vertex((t.v2.x + t.v3.x)/2, (t.v2.y + t.v3.y)/2, (t.v2.z + t.v3.z)/2);
        Vertex m3 =
            new Vertex((t.v1.x + t.v3.x)/2, (t.v1.y + t.v3.y)/2, (t.v1.z + t.v3.z)/2);
        result.add(new Triangle(t.v1, m1, m3, t.color));
        result.add(new Triangle(t.v2, m1, m2, t.color));
        result.add(new Triangle(t.v3, m2, m3, t.color));
        result.add(new Triangle(m1, m2, m3, t.color));
    }
    for (Triangle t : result) {
        for (Vertex v : new Vertex[] { t.v1, t.v2, t.v3 }) {
            double l = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z) / Math.sqrt(30000);
            v.x /= l;
            v.y /= l;
            v.z /= l;
        }
    }
    return result;
}


Here's what you will see:

You can find the full source code for this app here. It's only 220 lines and has no dependencies - you can just compile and run it!

I will finish this article by recommending one awesome book: 3D Math Primer for Graphics and Game Development. It explains all the details of rendering pipelines and math involved - definitely a worthy read if you are interested in rendering engines.

1. Thanks for a huge article!!!

Maybe a bit off-topic, but still: if I put this sample on Android, will it outperform a similar app that uses OpenGL? Basically, will it benefit from the video chip on board? Thanks again

1. You're welcome!

No, OpenGL will be much faster - it uses a lot of clever optimizations and also benefits from the graphics card. This sample is purely software, so it is at a disadvantage.

2. Could you have skipped the manual flat-shading by using g2.fill(path)?

1. Only until the z-buffer comes into play - after that, it is impossible to determine the z-coordinate from g2.fill.

2. Is there any clever way to reorganize the shapes in the render array, so they come out in order? Could I sort polygons by their maximum Z value?

3. I don't think so. Consider the case of two intersecting triangles - in some pixels triangle 1 will be on top, in others triangle 2 will be. So there is no strictly defined order.

4. Assuming no two polygons intersect, would it work?

5. Again, not for all cases - imagine configuration with 3 shapes (A,B,C), where A partially overlays B and is partially overlaid by C, B in turn overlays C, and lastly C is partially obscured by B and is above A. Again, no strict ordering.
(something like that famous Escher's work: http://files.harrowakker.webnode.nl/200000058-28fec29f90/EscherOmhoogOmlaag.jpg)

6. Oh, ok. Thought I could get away with using g2.fill and clever ordering. Welp, time to rewrite my Renderable interface

3. Could I implement this using polygons that take any number of inputs, as opposed to just triangles?

1. Yes. You just need to create rasterization method for your polygons. But as far as I know, this will involve splitting polygon into several triangles and then rasterizing those - so you're back at square one. That's the basic reasoning behind the fact that video cards only work with triangles - all polygons can be viewed as a group of adjacent triangles, so it's much simpler to unify all interfaces and view the whole world as lots of triangles.
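To illustrate the splitting step mentioned in that answer, here is a hedged sketch of the simplest approach - fan triangulation of a convex polygon. The class and method names are mine, and vertices are plain double[3] arrays so the snippet stays self-contained.

```java
import java.util.ArrayList;
import java.util.List;

public class FanTriangulate {
    // Split a convex polygon (vertices listed in order) into a triangle fan
    // anchored at the first vertex. Each resulting triangle is a double[3][3].
    static List<double[][]> triangulate(double[][] poly) {
        List<double[][]> tris = new ArrayList<>();
        for (int i = 1; i < poly.length - 1; i++) {
            tris.add(new double[][] { poly[0], poly[i], poly[i + 1] });
        }
        return tris;
    }

    public static void main(String[] args) {
        double[][] quad = {
            { 0, 0, 0 }, { 1, 0, 0 }, { 1, 1, 0 }, { 0, 1, 0 }
        };
        // a quad splits into 2 triangles; in general an n-gon yields n - 2
        System.out.println(triangulate(quad).size()); // prints 2
    }
}
```

Note this only works for convex polygons; concave ones need a real triangulation algorithm such as ear clipping.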

4. can you provide a download link for the full project? some of this is not very clear.

1. Here it is: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99

5. Why do you need to use barycentric coordinates when determining if a pixel lies inside the triangle's area? Isn't it possible to just use the pixel coordinates and paint the triangle accordingly?

1. It is the simplest method, easier to understand and implement - so I decided to use it in this tutorial. There are several others, but they require vertex sorting and complex logic with many corner cases. Here's the overview: http://www.sunshine2k.de/coding/java/TriangleRasterization/TriangleRasterization.html

2. Is it possible to use the barycentric coordinate system in this tutorial to get texture coordinates on an image?

3. Yes, I suppose. You will need to assign texture coordinates to vertices, and then interpolate using barycentric coordinates to get texture coordinates inside the triangle.
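As a sketch of the interpolation described in that answer (all names are mine; b1, b2 and b3 would be the barycentric weights computed in the article's rasterization loop):

```java
public class BarycentricUV {
    // Interpolate per-vertex texture coordinates (u, v) at a pixel using its
    // barycentric weights b1, b2, b3 (they sum to 1 inside the triangle).
    static double[] interpolateUV(double b1, double b2, double b3,
                                  double[] uv1, double[] uv2, double[] uv3) {
        return new double[] {
            b1 * uv1[0] + b2 * uv2[0] + b3 * uv3[0],
            b1 * uv1[1] + b2 * uv2[1] + b3 * uv3[1]
        };
    }

    public static void main(String[] args) {
        double[] uv1 = {0, 0}, uv2 = {1, 0}, uv3 = {0, 1};
        // at the centroid (all weights 1/3) we land a third of the way along each axis
        double[] uv = interpolateUV(1.0/3, 1.0/3, 1.0/3, uv1, uv2, uv3);
        System.out.printf("%.3f %.3f%n", uv[0], uv[1]); // prints 0.333 0.333
    }
}
```

For orthographic projection this is all you need; with perspective projection the weights must additionally be corrected for depth.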

4. I have been able to understand and create my own 3D rendering engine using the great tutorial you have provided and a lot of other documents that explain all of the mathematics behind it. With all of that said, I still have a few questions. My major one right now is whether it is possible to use the z-buffer with only two barycentric coordinates. I understand the use of 3, but if you use the statement:
if (b1 >= 0 && b2 >= 0 && b1 + b2 <= 1) {
.....
}
you can slightly speed up the performance of the engine, but I have found that only calculating and using 2 barycentric coordinates doesn't work when applied to the z-buffer. Any insight on a possible solution would be very helpful.

5. Unless you check the third coordinate as well, you may get points outside the triangle area (b3 may be negative, for example). If you want to improve the performance, it would be much better to remove the barycentric computations completely and use better rasterization algorithms.

6. I figured out that it is possible to substitute the third barycentric coordinate by subtracting the sum of the first and second barycentric coordinates from 1 (not too long after I posted the question, actually...). This way you can successfully calculate the correct distance for the z-buffer. It is even possible to use the barycentric coordinates directly as texture coordinates as well.

Now that you mention it, do you know any better algorithms for rasterizing triangles I might be able to look into?

7. Yes, you can look at "standard" algorithm or Bresenham algorithm, described here: http://www.sunshine2k.de/coding/java/TriangleRasterization/TriangleRasterization.html

8. How would you texture an object using this algorithm?

9. That's harder. If you need to go that way, you will probably still need some form of barycentric coordinates. Here's a good explanation, with optimized rasterization: http://www.scratchapixel.com/lessons/3d-basic-rendering/rasterization-practical-implementation/perspective-correct-interpolation-vertex-attributes?url=3d-basic-rendering/rasterization-practical-implementation/perspective-correct-interpolation-vertex-attributes

10. Do you know any good sources for learning how to use OpenCL or LWJGL? I am curious to see how fast my modified 3D rendering program would run using the GPU to render the objects.

11. No, never tried going that route.

6. Is there a way to implement a camera position into this program or is it purely a fixed view system?

1. Of course. In fact, the rotation examples in the article do exactly that - you can think of rotating an object in front of a fixed camera as rotating the camera around a fixed object.

As far as I know, camera positions in real 3D engines are also implemented this way - the camera is always positioned at (0,0,0) and the rendered scene is transformed into that "camera space".

2. Instead of working in pixel coordinates, you could use normalized device coordinates (the x and y axes run from -1 to 1). From there, you can use projection, view and model matrices to control the vertex positions on the screen.

7. How do you add an XY rotation?

1. The first rotation matrix in the article achieves just that. Or maybe you are looking for something else?

2. I mean, I can only rotate it in 2 ways. How can I rotate it in the 3rd way?

3. Current examples only show heading and pitch transformations. You need to append roll transformation - I've done a quick tweak of the code for you: http://pastebin.com/7r222Z6r (lines 68-74 are relevant).

8. I work with BlueJ and when I try to compile the triangle-class (from the 2nd code example) it says, that it cannot find the class Color. Could you help me out with this?

1. That's an easy fix - you probably placed that class into a separate file, so it can't find the required imports. Add "import java.awt.*;" at the top of the file.

You can also look at the full code here: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99

2. Thank you :)

9. I have more or less created my own 3D engine in Java, using scan-line rasterisation and refreshing at 60 Hz. The problem I am encountering is that when painting with the Graphics object, it cannot paint enough between frames and gives me a semi-complete surface with artifacting near the bottom. When I instead draw the surface on a BufferedImage and render that with a Graphics object, I get a refresh rate of 60 Hz. Any advice on what I should do? Change the rasterisation method, etc.? Thank you.

1. Edit: I have also overridden the paint method to try and reduce latency without much success.

2. You are looking for double-buffering. Just call .setDoubleBuffered(true) on your top-level component.

Essentially, it is almost the same as your solution with buffered image - all drawing commands are first output to temporary image, and only after the drawing is complete that image is drawn on actual screen.

10. Hi,
I noticed that the program gave error at line 128 & 129 "->"
DemoViewer.java:128: error: illegal start of expression
DemoViewer.java:129: error: illegal start of expression

1. Hi! Which java version are you using? Seems it fails on lambda expressions, which were introduced in Java 8.

For older Java versions, you can rewrite those lines as follows: headingSlider.addChangeListener(new ChangeListener() { @Override public void stateChanged(ChangeEvent e) { renderPanel.repaint(); } });

11. Hello Rogach - I am really impressed with how simple this demo is. However, it only shows an affine projection. How difficult would it be to make it a fully 4x4 matrix for perspective projection? I am trying to build a simple cube viewer that I can control the FoV. But not much point unless fully perspective. Can you help?

1. Hi! You probably don't need a 4x4 matrix for perspective projection - you can just divide by the Z coordinate (but be careful with negative Z values).

But camera control will feel weird in that case, since in the current implementation the camera is fixed at (0,0,0) and there is no way to express translations in a 3x3 matrix. Expanding to a 4x4 matrix should not be hard - just add a W coordinate to Vertex, replace the Matrix3 class with Matrix4 (with appropriate changes), add a [0,0,0,1] row and column to the heading, roll and pitch transforms, and add a pan transform somewhere.
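A minimal standalone sketch of that divide-by-Z idea might look as follows. All names, the camera placement on the +Z axis, and the focal-length formula are illustrative assumptions, not code from the article.

```java
public class PerspectiveDivide {
    // Project a 3D point to 2D: scale X and Y by a focal factor divided by the
    // point's distance from a camera sitting at (0, 0, cameraZ) looking at the origin.
    static double[] project(double x, double y, double z,
                            double fovDegrees, double cameraZ) {
        double focal = 1.0 / Math.tan(Math.toRadians(fovDegrees) / 2);
        double depth = cameraZ - z;      // distance along the view axis
        if (depth <= 0) return null;     // behind the camera: clip
        return new double[] { focal * x / depth, focal * y / depth };
    }

    public static void main(String[] args) {
        // the same lateral offset appears larger when the point is closer to the camera
        double[] near = project(100, 0, 200, 90, 400);
        double[] far  = project(100, 0, -200, 90, 400);
        System.out.println(near[0] > far[0]); // prints true
    }
}
```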

2. Here is some code that is able to convert the original affine screen coordinates to perspective projection coordinates. Don't worry about the extra array lists, those are just for my own organization purposes.

double r = Math.pow(objects.get(o).faces.get(i).v.get(ii).zDisplay, 2) + Math.pow(objects.get(o).faces.get(i).v.get(ii).x, 2) + Math.pow(objects.get(o).faces.get(i).v.get(ii).y, 2);
r = Math.sqrt(r);
r = ((r * Math.PI) / (360.0d / FOVslider.getValue()));
r = (r / frame.getHeight());
objects.get(o).faces.get(i).v.get(ii).xDisplay = objects.get(o).faces.get(i).v.get(ii).xDisplay / r;
objects.get(o).faces.get(i).v.get(ii).yDisplay = objects.get(o).faces.get(i).v.get(ii).yDisplay / r;

I use xdisplay and ydisplay as separate values for displaying each vertex on the screen so that I can modify them without worrying about accidentally tampering with other variable values.

3. This looks more like fish-eye projection, not perspective projection.

For example, consider several objects with equal Z coordinate. Under this projection, object close to the center will get one value of R, but for object far away from the center (but still at the same Z) R will be greater (2x, for example). Thus objects away from the center will be smaller (since you divide by R).

4. Yes, it does fish-eye the image, but technically it is mathematically correct perspective projection. For it to look like proper perspective projections in computer graphics, all you have to do is divide by the Z value, not the radial distance to the camera.

5. Hi! Saw your comment and was wondering if you were able to do this with a positionable camera

12. Ok - I'll give it a go. i am pretty new to this stuff. What I like about your implementation is that it is almost entirely raw java - you are not using the Java3D API, which already has its own camera class and so on. The way you have done it means you need to understand every aspect to get it to work. If you already have an example with a 4x4 matrix that would be useful...

1. wow - that was surprisingly easy. However, I do not really have a perspective view (just distorted isometric). Still need to do some maths on the w value (ie scale z or w?). Any ideas? I changed your tetrahedron to a cube. Code is here: http://wyeldsoft.com/temp/DemoViewerPersp.java

2. I don't think you can achieve perspective projection using only a matrix - basically, you need to divide X and Y by Z coordinate, and that's not possible to do via matrix multiplication on the vector. For example, OpenGL's perspective projection matrix is only needed for clipping - actual perspective projection happens manually after all the matrices.

I took your code, and added the necessary tweaks for it to work with perspective transform. The actual magic happens in lines 132-133 (fov angle to scaling computation) and lines 169-174 (division by Z).

3. You'll probably want to rewrite the GUI to see the effects better - you now need 6 sliders: 3 for camera XYZ position and 3 for camera rotation.

4. Any ideas how to adjust the distance from the nominal camera position whilst adjusting the FoV? What I am trying to do is create a slider which adjusts FoV between 0 and 180 degrees (which is done). The problem is of course as it approaches 180 degrees the cube is a long way from the camera and vice versa as it approaches 0 degrees it is too close to the camera. If there was a way to maintain relative size or proportion during the transform then you could see the cube go from obtuse perspective to acute. A bit like going from a wide-angle lens to a telephoto lens but the object in view remains roughly the same size.

here is the section of code I am working with from Rogach:

double fov = (1.0/Math.tan(fovAngle))*180;

13. Thanks for a very interesting article - I got all the examples working!

14. Hello Rogach. Can this approach be used to create a simple 3D view for a pipe bending machine?
What I mean is this: when you bend a pipe, you basically have 3 'parts' of bending: straight, curve (or bend) and rotation (of the pipe).
For example: to bend a pipe in U shape you need:
1. Straight: 500 mm
2. Bend: 90 degrees
3. straight: 100 mm
4. Bend: 90 degrees
What I'd like to have is to draw a 'pipe' which follows these steps, so at the end you have a complete U-shaped bent pipe on the screen.
Can you help me with this?

1. You could simply specify a cylinder instead of a tetrahedron. To do this, it would be easier to include a parser for external *.obj or other 3D model format, instead of writing all the vertex locations for cylinder etc. The cylinder object would have to have enough segments for bending. The bend operations would be performed on the model and simply displayed in Rogach's 3D viewer.

15. A link for complete source code would be great. Also, you are not very clear on where to insert the lists and double code and stuff.

1. Sorry for the late reply, comments were broken on the article. I've included the link to the complete source code at the end of the article, here it is just in case: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c

16. This comment has been removed by the author.

17. any tips on import/translators for object import

18. How do I move the object along the Z coordinate?

19. My program has the world z axis going towards the camera and the x axis going left instead of the standard z forward, y up and x right. How can I change this?

1. Sorry for the late reply, comments were broken on the article.
You can either preprocess the coordinates before performing the drawing (e.g. simply copy the object and replace X, Z with their negatives), or you can tweak the rendering code itself - but that would be a bit more difficult since the underlying medium (BufferedImage) expects X axis to increase to the right.

20. Any chance you would be able to do this with camera motion as well? Like in a game engine. If you would be willing to do this, that would be immensely helpful for something I'm trying to do.

21. This comment has been removed by the author.

22. I have used this to make a fairly basic (and not very efficient) 3D render engine. How can I cull out the "backside" of the triangles. So that only one side of the triangle renders, like in most render engines.

1. We compute the normal vector for each triangle (line 87 in the full code), so you can use the sign of the Z coordinate of this vector to determine whether the triangle faces the camera (e.g. skip drawing if norm.z is negative).
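A hedged standalone sketch of that check (the class and method names are mine; only the Z component of the cross product is needed, since we just compare it against zero):

```java
public class BackfaceCull {
    // The Z component of the cross product of edges AB and AC tells whether a
    // triangle (after transformation) faces the viewer, assuming +Z points
    // towards the camera as in the article.
    static double normalZ(double[] a, double[] b, double[] c) {
        double abx = b[0] - a[0], aby = b[1] - a[1];
        double acx = c[0] - a[0], acy = c[1] - a[1];
        return abx * acy - aby * acx; // z of AB x AC
    }

    public static void main(String[] args) {
        double[] a = {0, 0, 0}, b = {1, 0, 0}, c = {0, 1, 0};
        // one winding order faces the camera, the reversed one faces away;
        // a rasterizer would simply skip triangles with negative normalZ
        System.out.println(BackfaceCull.normalZ(a, b, c) > 0); // prints true
        System.out.println(BackfaceCull.normalZ(a, c, b) > 0); // prints false
    }
}
```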

23. Is this strictly a third-person perspective, or can you somehow move the rotation to first person? I can't think of how I could make this happen.

1. Yes, you can rotate the camera without changing the position. I described the basic idea in the comment under the source code: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c#gistcomment-3195590

24. How can i change the camera position?

1. and camera rotating

2. Here's the code that is responsible for transformation from world space to camera space: https://gist.github.com/Rogach/f3dfd457d7ddb5fcfd99/4f2aaf20a468867dc195cdc08a02e5705c2cc95c#file-demoviewer-java-L52
You'll need to tweak it according to your requirements.

25. This was an awesome summary! 🥰
Parts of it I didn't understand, but by looking up those concepts on YouTube, I eventually got it all.
It has been a dream of mine for 30+ years to actually understand basic 3D rendering at a low level - and that finally happened today 😄

Thanks a bundle! 👍👍🏆