Question
I'm trying to create a procedural animation engine for a simple 2D game that would let me create nice-looking animations out of a small number of images (similar to this approach, but for 2D: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach)
At the moment I have keyframes which hold data for different animation objects; the keyframes are arrays of floats representing the following:
translateX, translateY, scaleX, scaleY, rotation (degrees)
I'd like to add skewX, skewY, taperTop, and taperBottom to this list, but I'm having trouble properly rendering them.
This was my attempt at implementing a taper to the top of the sprite to give it a trapezoid shape:
float[] vert = sprite.getVertices();
vert[5] += 20; // top-left vertex x co-ordinate
vert[10] -= 20; // top-right vertex x co-ordinate
batch.draw(texture, vert, 0, vert.length);
Unfortunately this is producing some weird texture morphing.
I had a bit of a Google and a look around StackOverflow and found this, which appears to be the problem I'm having:
http://www.xyzw.us/~cass/qcoord/
However I don't understand the maths behind it (what are s, t, r and q?).
Can anyone explain it in simpler terms?
Answer
Basically, the less a quad resembles a rectangle, the worse the appearance due to the effect of linearly interpolating the texture coordinates across the shape. The two triangles that make up the quad are stretched to different sizes, so linear interpolation makes the seam very noticeable.
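You can see the distortion numerically. Below is a plain-Java sketch (no libGDX, and the trapezoid dimensions are made up for illustration): for a trapezoid whose bottom edge is twice as wide as its top, affinely interpolating the UVs over one of the two triangles gives a u value at the quad's centre that doesn't match the geometric midpoint of that row.

```java
public class AffineSeamDemo {
    // Hypothetical trapezoid: bottom edge (0,0)-(2,0), top edge (0.5,1)-(1.5,1).
    // UVs at the corners: BL(0,0), BR(1,0), TR(1,1), TL(0,1).
    // The quad is drawn as two triangles: (BL, BR, TR) and (BL, TR, TL).

    /** Affine-interpolated u at the quad centre (1, 0.5), inside triangle BL-BR-TR. */
    static double affineUAtCentre() {
        // Barycentric weights of (1, 0.5) in BL(0,0), BR(2,0), TR(1.5,1):
        //   from y: c = 0.5;  from x: 2b + 1.5c = 1 -> b = 0.125;  a = 1 - b - c = 0.375
        double a = 0.375, b = 0.125, c = 0.5;
        return a * 0 + b * 1 + c * 1; // u at BL=0, BR=1, TR=1
    }

    public static void main(String[] args) {
        // The centre sits exactly halfway across its row of the trapezoid (the row at
        // y = 0.5 spans x = 0.25..1.75), so a distortion-free mapping would sample
        // u = 0.5. Affine interpolation samples 0.625 instead, and the gradient jumps
        // at the diagonal into the other triangle -- that jump is the visible seam.
        System.out.println("affine u at centre: " + affineUAtCentre()); // 0.625
    }
}
```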
The texture coordinates of each vertex are linearly interpolated for each fragment that the fragment shader processes. Texture coordinates are typically stored with the size of the object already divided out, so the coordinates are in the range 0-1, corresponding to the edges of the texture (values outside this range are clamped or wrapped around). This is also typically how any 3D modeling program exports meshes.
For a trapezoid, we can limit the distortion by pre-multiplying the texture coordinates by the widths, and then dividing the widths back out of the texture coordinates after the linear interpolation. This is like bending the diagonal between the two triangles so that its slope is more horizontal at the corners on the wide side of the trapezoid. Here is a picture that helps illustrate it.
Texture coordinates are usually expressed as a 2D vector with components U and V, also known as S and T. But if you want to divide the size out of the components, you need one more component that you are going to divide by after interpolation, and this is called the Q component. (The P component would be used as the third position in the texture if you were looking up something in a 3D texture instead of a 2D texture).
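The divide-by-Q trick can be checked with the same kind of arithmetic a rasterizer does. A sketch with assumed numbers (a trapezoid with bottom edge (0,0)-(2,0) and top edge (0.5,1)-(1.5,1), so the bottom is twice as wide as the top; q = 2 on the bottom vertices, q = 1 on the top):

```java
public class ProjectiveUvDemo {
    // The quad centre (1, 0.5) lies in triangle BL(0,0), BR(2,0), TR(1.5,1),
    // with barycentric weights a=0.375, b=0.125, c=0.5.

    /** Interpolate (u*q, q) linearly per triangle, then divide by q. */
    static double projectiveUAtCentre() {
        double a = 0.375, b = 0.125, c = 0.5;
        // Vertex data as (u, q): BL(0, 2), BR(1, 2), TR(1, 1).
        double uq = a * (0 * 2) + b * (1 * 2) + c * (1 * 1); // interpolated u*q = 0.75
        double q  = a * 2       + b * 2       + c * 1;       // interpolated q   = 1.5
        return uq / q; // dividing after interpolation undoes the premultiply
    }

    public static void main(String[] args) {
        // Unlike plain affine interpolation, this lands on the geometric midpoint
        // of the row, so the two triangles agree and the seam disappears.
        System.out.println("projective u at centre: " + projectiveUAtCentre()); // 0.5
    }
}
```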
Now here comes the hard part... libgdx's SpriteBatch doesn't support the extra vertex attribute necessary for the Q component. So you can either clone SpriteBatch and carefully go through and modify it to have an extra component in the texCoord attribute, or you can try to re-purpose the existing color attribute, although it's stored as an unsigned byte.
Regardless, you will need pre-width-divided texture coordinates. One way to simplify this is, instead of using the actual sizes of the quad's edges, to take the ratio of the trapezoid's top and bottom widths, so we can treat the top edge as width 1 and therefore leave its vertices alone.
float bottomWidth = taperBottom / taperTop;
Then you need to modify the TextureRegion's existing texture coordinates to pre-multiply them by the widths. We can leave the vertices on the top side of the trapezoid alone because of the above simplification, but the U and V coordinates of the two bottom vertices need to be multiplied by bottomWidth. You would need to recalculate them and put them into your vertex array every time you change the TextureRegion or one of the taper values.
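As a concrete sketch of that recalculation (plain Java, libGDX-free; the class and method names are made up, the UV arguments stand in for a TextureRegion's u/u2/v/v2, and v2 is assumed to be the bottom edge's V):

```java
public class TaperedUvs {
    /**
     * Returns {u, v, q} triples for the bottom-left, bottom-right, top-right and
     * top-left vertices of a tapered quad. The top edge is treated as width 1,
     * so only the bottom pair gets premultiplied by the width ratio.
     */
    static float[][] premultiplied(float u, float u2, float v, float v2,
                                   float taperTop, float taperBottom) {
        float bottomWidth = taperBottom / taperTop; // bottom width relative to the top
        return new float[][] {
            { u  * bottomWidth, v2 * bottomWidth, bottomWidth }, // bottom-left
            { u2 * bottomWidth, v2 * bottomWidth, bottomWidth }, // bottom-right
            { u2, v, 1f },                                       // top-right: untouched
            { u,  v, 1f },                                       // top-left:  untouched
        };
    }

    public static void main(String[] args) {
        // Full-texture region (0..1 UVs), bottom twice as wide as the top:
        float[][] verts = premultiplied(0f, 1f, 0f, 1f, 1f, 2f);
        for (float[] uvq : verts) {
            System.out.printf("u*q=%.2f v*q=%.2f q=%.2f%n", uvq[0], uvq[1], uvq[2]);
        }
    }
}
```

These three components per vertex are what you'd pack into the extra (or repurposed) attribute of your modified SpriteBatch.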
In the vertex shader, you would need to pass the extra Q component to the fragment shader. In the fragment shader, we normally look up our texture color using the size-divided texture coordinates like this:
vec4 textureColor = texture2D(u_texture, v_texCoords);
but in our case we still need to divide by that Q component:
vec4 textureColor = texture2D(u_texture, v_texCoords.st / v_texCoords.q);
However, this causes a dependent texture read because we are modifying a vector before it is passed into the texture function. GLSL provides a function that automatically does the above (and I assume does not cause a dependent texture read):
vec4 textureColor = texture2DProj(u_texture, v_texCoords); //first two components automatically divided by last component
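Putting the vertex-shader side together, a minimal sketch (a sketch only, assuming the texture-coordinate attribute has been widened to a vec3 carrying (u*q, v*q, q); the attribute and uniform names follow libGDX's default shader conventions):

```glsl
attribute vec4 a_position;
attribute vec3 a_texCoord0;   // (u*q, v*q, q), premultiplied on the CPU side
uniform mat4 u_projTrans;
varying vec3 v_texCoords;     // vec3, so q is linearly interpolated along with u*q, v*q

void main() {
    v_texCoords = a_texCoord0; // pass through; the rasterizer interpolates all three
    gl_Position = u_projTrans * a_position;
}
```

The fragment shader then performs the divide via texture2DProj as shown above.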