<?xml version="1.0" ?>
<rss version="2.0">
<channel>
  <title>Journal of Computer Graphics Techniques</title>
  <link>http://jcgt.org</link>
  <description>The Journal of Computer Graphics Techniques (JCGT) is a diamond open access journal: peer reviewed and free to readers and authors. Its focus is short articles on computer graphics practice. JCGT aims to serve the community by disseminating proven and reliable techniques in all areas of computer graphics, including software, hardware, games, and interaction. Alongside the journal papers themselves, JCGT publishes and archives the open-source code that authors provide.</description>
  <image>
    <url>http://jcgt.org/teapot.png</url>
    <link>http://jcgt.org</link>
  </image>
  <pubDate>Thu, 05 Feb 2026 05:00:00 GMT</pubDate>
  <item>
    <title><![CDATA[JCGT paper v14n1, Pages 116-139b - UPDATE]]></title>
    <link>http://jcgt.org/news/2026-02-05-EON-update.html</link>
    <guid>http://jcgt.org/news/2026-02-05-EON-update.html</guid>
    <category>News</category>
    <pubDate>Thu, 05 Feb 2026 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>The 2025 paper &ldquo;<a href="https://jcgt.org/published/0014/01/06/">EON: A practical energy-preserving rough diffuse BRDF</a>&rdquo; was updated with an appendix that provides an albedo inversion formula, as well as two other small enhancements.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Optimizing spatiotemporal variance-guided filtering for modern GPU architectures]]></title>
    <link>http://jcgt.org/published/0015/01/02</link>
    <guid>http://jcgt.org/published/0015/01/02</guid>
    <category>Paper</category>
    <pubDate>Mon, 12 Jan 2026 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0015/01/02/icon.png">]]><![CDATA[<p>Rostyslav Pikulsky, Optimizing spatiotemporal variance-guided filtering for modern GPU architectures, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 15, no. 1, 20-43, 2026</p>]]><![CDATA[<p>Path tracing is generally considered one of the most promising real-time rendering solutions. One of the major problems of its real-time application is the large variance of the outputs, a result of the low number of Monte Carlo samples that can be processed under execution time constraints. In response, real-time path tracing reconstruction (denoising) has become an important research direction. Spatiotemporal variance-guided filtering (SVGF) is one of the state-of-the-art real-time denoising techniques and serves as a basis for many production solutions. We present a set of optimization techniques for the à-trous wavelet transform of SVGF. They reduce execution time without any degradation in filtering quality by optimizing memory access efficiency based on the specifics of modern GPU architectures. Compared to the base SVGF implementation, a 1.3&ndash;2.5x performance gain is achieved in different testing scenarios. The proposed techniques can be implemented for different hardware targets and usage scenarios and impose no limitations on filter configuration.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Practically Utilizing Neural Networks in CPU-based Production Rendering]]></title>
    <link>http://jcgt.org/published/0015/01/01</link>
    <guid>http://jcgt.org/published/0015/01/01</guid>
    <category>Paper</category>
    <pubDate>Mon, 05 Jan 2026 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0015/01/01/icon.png">]]><![CDATA[<p>Steve Bako, Ryusuke Villemin, and Magnus Wrenninge, Practically Utilizing Neural Networks in CPU-based Production Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 15, no. 1, 1-19, 2026</p>]]><![CDATA[<p>Visual Effects (VFX) pipelines struggle to integrate machine learning (ML) within the rendering loop to reduce computation costs and enhance image quality. In particular, most offline path tracers run exclusively on the CPU and rely on numerous low-latency operations that are not conducive to typical GPU-based inference workflows. While CPU-based network inference is not new, we demonstrate how lightweight networks can be practically leveraged in such low-latency environments. As a proof of concept, we deployed multiple networks into our own production pipeline, targeting two separate processes, to enable significant improvements in memory, speed, and quality during rendering while operating within these constraints.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Neural Visibility Cache for Real-Time Light Sampling]]></title>
    <link>http://jcgt.org/published/0014/02/01</link>
    <guid>http://jcgt.org/published/0014/02/01</guid>
    <category>Paper</category>
    <pubDate>Mon, 18 Aug 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/02/01/icon.png">]]><![CDATA[<p>Jakub Bok&scaron;ansk&yacute; and Daniel Meister, Neural Visibility Cache for Real-Time Light Sampling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 2, 1-19, 2025</p>]]><![CDATA[<p>Direct illumination with many lights is an inherent component of physically-based rendering, and it remains challenging, especially in real-time scenarios. We propose an online-trained neural cache that stores visibility between lights and 3D positions. We feed light visibility to weighted reservoir sampling (WRS) to sample a light source. The cache is implemented as a fully-fused multilayer perceptron (MLP) with multi-resolution hash-grid encoding, enabling online training and efficient inference on modern GPUs at real-time frame rates. The cache can be seamlessly integrated into existing rendering frameworks and can be used in combination with other real-time techniques such as spatiotemporal reservoir sampling (ReSTIR).</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT leadership]]></title>
    <link>http://jcgt.org/news/2025-07-20-NewEIC.html</link>
    <guid>http://jcgt.org/news/2025-07-20-NewEIC.html</guid>
    <category>News</category>
    <pubDate>Sun, 20 Jul 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>As previously announced, <a href="https://cgg.mff.cuni.cz/~wilkie/Website/Home.html">Alexander Wilkie</a> has now assumed duties as Editor in Chief of the Journal of Computer Graphics Techniques. Interim Editor in Chief <a href="https://erich.realtimerendering.com/">Eric Haines</a> is continuing on the JCGT Advisory Board, and the previous Editor in Chief <a href="https://www.umbc.edu/~olano">Marc Olano</a> is continuing in the role of Managing Editor and member of the Advisory Board.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Higher Order Continuity for Smooth As-Rigid-As-Possible Shape Modeling]]></title>
    <link>http://jcgt.org/published/0014/01/10</link>
    <guid>http://jcgt.org/published/0014/01/10</guid>
    <category>Paper</category>
    <pubDate>Fri, 06 Jun 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/10/icon.png">]]><![CDATA[<p>Annika Oehri, Philipp Herholz, and Olga Sorkine-Hornung, Higher Order Continuity for Smooth As-Rigid-As-Possible Shape Modeling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 198-215, 2025</p>]]><![CDATA[<p>We propose a modification of the As-Rigid-As-Possible (ARAP) mesh deformation energy with higher-order smoothness, which overcomes a prominent limitation of the original ARAP formulation: spikes and lack of continuity at the manipulation handles. Our method avoids spikes even when using single-point positional constraints. Since no explicit rotations have to be specified, the user interaction can be realized through a simple click-and-drag interface, where points on the mesh can be selected and moved around while the rest of the mesh surface automatically deforms accordingly. Our method preserves the benefits of ARAP deformations: it is easy to implement and thus useful for practical applications, while its efficiency makes it usable in real-time, interactive scenarios on detailed models.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Zero-Level Set Preserving Technique for Signed Distance Function Computation from an Implicit Surface]]></title>
    <link>http://jcgt.org/published/0014/01/09</link>
    <guid>http://jcgt.org/published/0014/01/09</guid>
    <category>Paper</category>
    <pubDate>Fri, 16 May 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/09/icon.png">]]><![CDATA[<p>Pierre-Alain Fayolle, A Zero-Level Set Preserving Technique for Signed Distance Function Computation from an Implicit Surface, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 185-197, 2025</p>]]><![CDATA[<p>In this short article, we describe a technique to convert an implicit surface into a signed distance function (SDF) while exactly preserving the zero-level set of the implicit surface. The proposed approach relies on embedding the input implicit surface in the final layer of a neural network by multiplying this neural network by the implicit surface function or a smoothed version of its sign function. The parameters of the corresponding function are then trained to minimize a loss function characterizing the SDF, either exactly by a term corresponding to an eikonal equation or approximately by a term involving the <i>p</i>-Laplace operator.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Importance-Sampled Filter-Adapted Spatio-Temporal Sampling]]></title>
    <link>http://jcgt.org/published/0014/01/08</link>
    <guid>http://jcgt.org/published/0014/01/08</guid>
    <category>Paper</category>
    <pubDate>Fri, 09 May 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/08/icon.png">]]><![CDATA[<p>Alan Wolfe, William Donnelly, and Henrik Hal&eacute;n, Importance-Sampled Filter-Adapted Spatio-Temporal Sampling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 174-184, 2025</p>]]><![CDATA[<p>Stochastic rendering can be improved using sample patterns with good spectral properties, leading to perceptually pleasing error and faster convergence. Both spatiotemporal blue noise masks (STBN) and filter-adapted spatiotemporal sampling (FAST) generate 3D volume textures of samples (2D for space + 1D for time) supporting simple nonuniform distributions such as the commonly used cosine-weighted hemisphere. Here, we extend FAST noise to general distributions, enabling the performance and image quality benefits of exact importance sampling. We demonstrate the utility of this work in both real-time and offline rendering, focusing on the specific example of depth of field with complex bokeh shapes.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Farthest sampling segmentation of triangulated surfaces]]></title>
    <link>http://jcgt.org/published/0014/01/07</link>
    <guid>http://jcgt.org/published/0014/01/07</guid>
    <category>Paper</category>
    <pubDate>Wed, 02 Apr 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/07/icon.png">]]><![CDATA[<p>V. Hernandez Mederos, D. Mart&iacute;nez, J. Estrada Sarlabous, and V. Guerra Ones, Farthest sampling segmentation of triangulated surfaces, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 140-173, 2025</p>]]><![CDATA[<p>In this paper we present <i>Farthest Sampling Segmentation</i> (FSS), a new method for the segmentation of a triangulated surface with n faces into patches. The method is based on the selection of a sample composed of k faces of the triangulation, with k &Lt; n. These faces are chosen by farthest point sampling with respect to a given metric. Pairwise face distances among the faces of the triangulation and the faces in the sample are used to compute an n &times; k affinity matrix W<sup>k</sup>. Rows of W<sup>k</sup> encode the similarity among all triangles and the sample. The segmentation is obtained by applying the clustering algorithm <i>k-means++</i> to the rows of W<sup>k</sup>. Several theoretical results that support the success of FSS are presented. We explain the relation between FSS and spectral segmentation methods. Moreover, it is shown that FSS is coherent and stable. Extensive numerical experimentation is included, with several metrics and a large variety of 3D triangular meshes. The quality of the segmentations is measured in terms of Rand and Jaccard distances between FSS and ground-truth segmentations. The results show that connected clusters are always produced and that segmentations obtained when k is less than 10% of n are as good as those obtained using the full affinity matrix. FSS has several advantages. It does not depend on parameters to be tuned by hand and is very flexible, since it can handle any metric. Moreover, it is a very cheap method, with a computational cost of O(knm), where m is the cost to evaluate a metric between two faces of the triangulation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[EON: A practical energy-preserving rough diffuse BRDF]]></title>
    <link>http://jcgt.org/published/0014/01/06</link>
    <guid>http://jcgt.org/published/0014/01/06</guid>
    <category>Paper</category>
    <pubDate>Mon, 31 Mar 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/06/icon.png">]]><![CDATA[<p>Jamie Portsmouth, Peter Kutz, and Stephen Hill, EON: A practical energy-preserving rough diffuse BRDF, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 116-139b, 2025</p>]]><![CDATA[<p>We introduce the <i>energy-preserving Oren--Nayar</i> (EON) model for reflection from rough surfaces. Unlike the popular qualitative Oren--Nayar model (QON) and its variants, our model is energy preserving via analytical energy compensation. We include self-contained GLSL source code for efficient evaluation of the new model and importance sampling based on a novel technique we term <i>Clipped Linearly Transformed Cosine</i> (CLTC) sampling.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Arc Blanc: a real time ocean simulation framework]]></title>
    <link>http://jcgt.org/published/0014/01/05</link>
    <guid>http://jcgt.org/published/0014/01/05</guid>
    <category>Paper</category>
    <pubDate>Wed, 05 Mar 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/05/icon.png">]]><![CDATA[<p>David Algis, B&eacute;renger Bramas, Emmanuelle Darles, and Lilian Aveneau, Arc Blanc: a real time ocean simulation framework, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 70-115, 2025</p>]]><![CDATA[<p>The oceans cover the vast majority of the Earth's surface. Their simulation is therefore of great scientific, industrial, and military interest, including in the computer graphics domain. By fully exploiting the multi-threading power of GPU and CPU, current state-of-the-art tools can achieve real-time ocean simulation, even if physical realism sometimes must be reduced for large scenes. Although most of the building blocks for implementing an ocean simulator are described in the literature, a clear explanation of how they interconnect is lacking. Hence, this paper brings all these components together, detailing all their interactions, in a comprehensive and fully described real-time framework that simulates the free ocean surface and the coupling between solids and fluid. This article also presents several improvements to enhance the physical realism of our model. The two main ones are: calculating the real-time velocity of ocean fluids at any depth, and computing the input of the solid-to-fluid coupling algorithm.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Compression of Spectral Images using Spectral JPEG XL]]></title>
    <link>http://jcgt.org/published/0014/01/04</link>
    <guid>http://jcgt.org/published/0014/01/04</guid>
    <category>Paper</category>
    <pubDate>Tue, 04 Mar 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/04/icon.png">]]><![CDATA[<p>Alban Fichet and Christoph Peters, Compression of Spectral Images using Spectral JPEG XL, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 49-69, 2025</p>]]><![CDATA[<p>The advantages of spectral rendering are increasingly well known, and corresponding rendering algorithms have matured. In this context, spectral images are used as input (e.g., reflectance and emission textures) and output of a renderer. Their large memory footprint is one of the major remaining issues with spectral rendering. Our method applies a cosine transform in the wavelength domain. We then reduce the dynamic range of higher-frequency Fourier coefficients by dividing them by the mean brightness, i.e., the Fourier coefficient for frequency zero. Then we store all coefficient images using JPEG XL. The mean brightness is perceptually the most important, and we store it with high quality. At higher frequencies, we use higher compression ratios and optionally lower resolutions. Our format supports the full feature set of spectral OpenEXR, but compared to this lossless compression, we achieve file sizes that are 10 to 60 times smaller than their ZIP compressed counterparts.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Stack-Free Traversal Algorithm for Left-Balanced k-d Trees]]></title>
    <link>http://jcgt.org/published/0014/01/03</link>
    <guid>http://jcgt.org/published/0014/01/03</guid>
    <category>Paper</category>
    <pubDate>Tue, 25 Feb 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/03/icon.png">]]><![CDATA[<p>Ingo Wald, A Stack-Free Traversal Algorithm for Left-Balanced k-d Trees, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 40-48, 2025</p>]]><![CDATA[<p>We present an algorithm that allows for find-closest-point and k-nearest-neighbor (kNN) style traversals of left-balanced k-d trees without the need for either recursion or software-managed stacks, instead using only the current and previously traversed nodes to compute which node to traverse next.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[GPU Friendly Laplacian Texture Blending]]></title>
    <link>http://jcgt.org/published/0014/01/02</link>
    <guid>http://jcgt.org/published/0014/01/02</guid>
    <category>Paper</category>
    <pubDate>Wed, 19 Feb 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/02/icon.png">]]><![CDATA[<p>Bartlomiej Wronski, GPU Friendly Laplacian Texture Blending, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 21-39, 2025</p>]]><![CDATA[<p>Texture and material blending is one of the leading methods for adding variety to rendered virtual worlds, creating composite materials, and generating procedural content. When done naively, it can introduce either visible seams or contrast loss, leading to an unnatural look not representative of blended textures. Earlier work proposed addressing this problem through careful manual parameter tuning, lengthy per-texture statistics precomputation, look-up tables, or training deep neural networks. In this work, we propose an alternative approach based on insights from image processing and Laplacian pyramid blending. Our approach does not require any precomputation or increased memory usage (other than the presence of a regular, non-Laplacian, texture mipmap chain), does not produce ghosting, preserves sharp local features, and can run in real time on the GPU at the cost of a few additional lower mipmap texture taps.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Chromaticity preserving analytic approximations to the CIE color matching functions]]></title>
    <link>http://jcgt.org/published/0014/01/01</link>
    <guid>http://jcgt.org/published/0014/01/01</guid>
    <category>Paper</category>
    <pubDate>Mon, 10 Feb 2025 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0014/01/01/icon.png">]]><![CDATA[<p>Thomas Puls, Chromaticity preserving analytic approximations to the CIE color matching functions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 14, no. 1, 1-20, 2025</p>]]><![CDATA[<p>Previous attempts to approximate the tabulated CIE color matching functions used least-squares fit methods applied to the functions' absolute values. However, these approximations do not preserve chromaticity well when the color matching functions fade out near the wavelength limits of human vision.</p> <p>Additionally, the analytical functions used to date were mostly defined piecewise, for example asymmetrical halves of a Gaussian normal distribution, resulting in discontinuities in higher-order derivatives.</p> <p>The method of approximation by analytical functions presented in this article accurately preserves chromaticity and is infinitely differentiable. It is applied to the industry standard CIE 1931 2&deg; Standard Observer. With the latter, the root-mean-square (RMS) error in the color matching functions is below 0.4% and below 0.0019 in the respective chromaticity values. Computing all three color matching functions requires only a total of ten calls of the exponential function (or just nine at the expense of accuracy), and no other functions are required.</p> <p>We also applied the method to the more modern CIE 170-1:2006 10&deg; Standard Observer. The source code for this standard is part of the supplementary material.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Motion Blurred Spheres Using Texture Mapping]]></title>
    <link>http://jcgt.org/published/0013/02/01</link>
    <guid>http://jcgt.org/published/0013/02/01</guid>
    <category>Paper</category>
    <pubDate>Wed, 20 Nov 2024 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0013/02/01/icon.png">]]><![CDATA[<p>Nelson Max, Efficient Motion Blurred Spheres Using Texture Mapping, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 13, no. 2, 1-10, 2024</p>]]><![CDATA[<p>This paper presents an efficient 2.5D motion blur algorithm for spheres with shading and highlights, using a single RGBA texture containing blurred spheres of varying light direction and varying blur lengths up to the diameter of the sphere. Longer-length blurs are produced by stretching the center portion of one of the longest blurs in the texture, and dimming appropriately. An application is given to a chemical kinetics visualization.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Incoming Editor in Chief]]></title>
    <link>http://jcgt.org/news/2024-11-11-NewEIC.html</link>
    <guid>http://jcgt.org/news/2024-11-11-NewEIC.html</guid>
    <category>News</category>
    <pubDate>Mon, 11 Nov 2024 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>We are proud to announce that <a href="https://cgg.mff.cuni.cz/~wilkie/Website/Home.html">Alexander Wilkie</a> will be taking over as Editor in Chief of the Journal of Computer Graphics Techniques in summer 2025. In the interim, <a href="https://erich.realtimerendering.com/">Eric Haines</a> will be acting Editor in Chief. Our long-time Editor in Chief <a href="https://www.umbc.edu/~olano">Marc Olano</a> will be joining the Advisory Board.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[A Practical Real-Time Model for Diffraction on Rough Surfaces]]></title>
    <link>http://jcgt.org/published/0013/01/01</link>
    <guid>http://jcgt.org/published/0013/01/01</guid>
    <category>Paper</category>
    <pubDate>Wed, 26 Jun 2024 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0013/01/01/icon.png">]]><![CDATA[<p>Olaf Clausen, Martin Mi&scaron;iak, Arnulph Fuhrmann, Ricardo Marroquim, and Marc Erich Latoschik, A Practical Real-Time Model for Diffraction on Rough Surfaces, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 13, no. 1, 1-27, 2024</p>]]><![CDATA[<p>Wave optics phenomena have a significant impact on the visual appearance of rough conductive surfaces even when illuminated with partially coherent light. Recent models address these phenomena, but none is real-time capable due to the complexity of the underlying physics equations. We provide a practical real-time model, building on the measurements and model by Clausen et al. [2023], that approximates diffraction-induced wavelength shifts and speckle patterns with only a small computational overhead compared to the popular Cook-Torrance GGX model. Our model is suitable for Virtual Reality applications, as it contains domain-specific improvements to address the issues of aliasing and highlight disparity.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Performance Comparison of Meshlet Generation Strategies]]></title>
    <link>http://jcgt.org/published/0012/02/01</link>
    <guid>http://jcgt.org/published/0012/02/01</guid>
    <category>Paper</category>
    <pubDate>Fri, 08 Dec 2023 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0012/02/01/icon.png">]]><![CDATA[<p>Mark Bo Jensen, Jeppe Revall Frisvad, and J. Andreas B&aelig;rentzen, Performance Comparison of Meshlet Generation Strategies, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 1-27, 2023</p>]]><![CDATA[<p>Mesh shaders were recently introduced for faster rendering of triangle meshes. Instead of pushing each individual triangle through the rasterization pipeline, we can create triangle clusters called meshlets and perform per-cluster culling operations. This is a great opportunity to efficiently render very large meshes. However, the performance of mesh shaders depends on how we create the meshlets. We tested rendering performance, on NVIDIA hardware, after the use of different methods for organizing triangle meshes into meshlets. To measure the performance of a method, we rendered meshes of different complexity from many randomly selected views and measured the render time per triangle. Based on our findings, we suggest guidelines for the creation of meshlets. Using our guidelines, we propose two simple methods for generating meshlets that result in good rendering performance when combined with hardware manufacturers' best practices. Our objective is to make it easier for the graphics practitioner to organize a triangle mesh into high-performance meshlets. To support this, we have uploaded our code to <a href="https://github.com/Senbyo/meshletmaker">github.com/Senbyo/meshletmaker</a>.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[The Energy Distance as a Replacement for the Reconstruction Loss in Conditional GANs]]></title>
    <link>http://jcgt.org/published/0012/01/02</link>
    <guid>http://jcgt.org/published/0012/01/02</guid>
    <category>Paper</category>
    <pubDate>Mon, 30 Jan 2023 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0012/01/02/icon.png">]]><![CDATA[<p>Eric Heitz and Thomas Chambon, The Energy Distance as a Replacement for the Reconstruction Loss in Conditional GANs, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 1, 29-48, 2023</p>]]><![CDATA[<p>Conditional generative adversarial networks (cGANs) are often trained with a reconstruction loss in addition to the adversarial loss to compensate for the latter's instability. However, reconstruction losses are known to conflict with the adversarial objective and prevent generating diverse outputs (mode collapse). This problem is acknowledged by the community but is either ignored or addressed with sophisticated approaches. We promote a surprisingly simple and yet unconsidered alternative: replacing the reconstruction loss with the energy distance. With a minor implementation modification, it solves the conflict problem with the adversarial objective, prevents mode collapse, and produces high-quality results in several image-processing tasks.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Accelerated Photon Mapping for Hardware-based Ray Tracing]]></title>
    <link>http://jcgt.org/published/0012/01/01</link>
    <guid>http://jcgt.org/published/0012/01/01</guid>
    <category>Paper</category>
    <pubDate>Thu, 26 Jan 2023 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0012/01/01/icon.png">]]><![CDATA[<p>Ren&eacute; Kern, Felix Br&uuml;ll, and Thorsten Grosch, Accelerated Photon Mapping for Hardware-based Ray Tracing, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 1, 1-28, 2023</p>]]><![CDATA[<p>Photon mapping is a numerical solution to the rendering equation based on tracing random rays. It works by first tracing the paths of the emitted light (photons) and storing them in a data structure, then collecting them later from the perspective of the camera. The collection step, in particular, is difficult to realize on the GPU due to its sequential nature. We present an implementation of a progressive photon mapper for ray tracing hardware (RTPM) based on a combination of existing techniques. Additionally, we present two small novel techniques that speed up RTPM even further by reducing the number of photons stored and evaluated. We demonstrate that RTPM outperforms existing hash-based photon mappers, especially on large and complex scenes.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Performance Comparison of Bounding Volume Hierarchies for GPU Ray Tracing]]></title>
    <link>http://jcgt.org/published/0011/04/01</link>
    <guid>http://jcgt.org/published/0011/04/01</guid>
    <category>Paper</category>
    <pubDate>Tue, 18 Oct 2022 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/04/01/icon.png">]]><![CDATA[<p>Daniel Meister and Ji&rcaron;&iacute; Bittner, Performance Comparison of Bounding Volume Hierarchies for GPU Ray Tracing, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 4, 1-19, 2022</p>]]><![CDATA[<p>Ray tracing is an inherent component of modern rendering algorithms. The bounding volume hierarchy (BVH) is a commonly used acceleration data structure employed in most ray tracing frameworks. Through the last decade, many methods addressing ray tracing with bounding volume hierarchies have been proposed. However, most of these methods were typically compared to only one or two reference methods. The acceleration provided by a particular BVH depends heavily on its construction algorithm. Even experts in the field dispute which method is the best. To add more insight into this challenge, we empirically compare the most popular methods addressing BVHs in the context of GPU ray tracing. Moreover, we combine the construction algorithms with other enhancements such as spatial splits, ray reordering, and wide BVHs. To estimate how close we are to the best-performing BVH, we propose a novel method using global optimization with simulated annealing and combine it with the existing best-performing BVH builders. For the sake of fairness and consistency, all methods are evaluated in a single unified framework, and we make all source code publicly available. We also study the correlation between tracing times and the estimated traversal cost induced by the surface area heuristic (SAH).</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Ray Tracing of Signed Distance Function Grids]]></title>
    <link>http://jcgt.org/published/0011/03/06</link>
    <guid>http://jcgt.org/published/0011/03/06</guid>
    <category>Paper</category>
    <pubDate>Wed, 21 Sep 2022 00:00:00 GMT</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/06/icon.png">]]><![CDATA[<p>Herman Hansson-S&ouml;derlund, Alex Evans, and Tomas Akenine-M&ouml;ller, Ray Tracing of Signed Distance Function Grids, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 3, 94-113, 2022</p>]]><![CDATA[<p>We evaluate the performance of a wide set of combinations of traversal and voxel intersection testing of signed distance function grids in a path tracing setting. In addition, we present an optimized way to compute the intersection between a ray and the surface defined by trilinear interpolation of signed distances at the eight corners of a voxel. We also provide a novel way to compute continuous normals across voxels and an optimization for shadow rays. On an NVIDIA RTX 3090, the fastest method uses the GPU's ray tracing hardware to trace against a bounding volume hierarchy, built around all non-empty voxels, and then applies either an analytic cubic solver or a repeated linear interpolation for voxel intersection.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Practical Real-Time Hex-Tiling]]></title>
    <link>http://jcgt.org/published/0011/03/05</link>
    <guid>http://jcgt.org/published/0011/03/05</guid>
    <category>Paper</category>
    <pubDate>2022-08-25</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/05/icon.png">]]><![CDATA[<p>Morten S. Mikkelsen, Practical Real-Time Hex-Tiling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 4, 77-94, 2022</p>]]><![CDATA[<p>To provide a convenient, easy-to-adopt approach to randomly tiled textures in the context of real-time graphics, we propose an adaptation of the by-example noise algorithm of Heitz and Neyret. The original method preserves contrast using a histogram-preserving method that requires a precomputation step to convert the source texture into a transform and inverse transform texture, which must both be sampled in the shader rather than the original source texture. Thus deep integration into the application is required for this to appear opaque to the author of the shader and material. In our adaptation we omit histogram preservation and replace it with a novel blending method that allows us to sample the original source texture. This omission is particularly sensible for a normal map as it represents the partial derivatives of a height map. In order to diffuse the transition between hex tiles, we introduce a simple metric to adjust the blending weights. For a texture of color, we reduce loss of contrast by applying a contrast function directly to the blending weights. Though our method works for color, we emphasize the use case of normal maps in our work because non-repetitive noise is ideal for mimicking surface detail by perturbing normals.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Barycentric Quad Rasterization]]></title>
    <link>http://jcgt.org/published/0011/03/04</link>
    <guid>http://jcgt.org/published/0011/03/04</guid>
    <category>Paper</category>
    <pubDate>2022-08-24</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/04/icon.png">]]><![CDATA[<p>Jules Bloomenthal, Barycentric Quad Rasterization, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 4, 65-76, 2022</p>]]><![CDATA[<p>When a quadrilateral is rendered as two triangles, a C<sup>1</sup> discontinuity can occur along the dividing diagonal. In 2004, Hormann and Tarini used generalized barycentric coordinates to eliminate the discontinuity. The present paper provides an implementation using a geometry shader, unavailable in 2004, and provides additional examples of the barycentric technique.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Lightweight Multidimensional Adaptive Sampling for GPU Ray Tracing]]></title>
    <link>http://jcgt.org/published/0011/03/03</link>
    <guid>http://jcgt.org/published/0011/03/03</guid>
    <category>Paper</category>
    <pubDate>2022-08-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/03/icon.png">]]><![CDATA[<p>Daniel Meister and Toshiya Hachisuka, Lightweight Multidimensional Adaptive Sampling for GPU Ray Tracing, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 4, 43-64, 2022</p>]]><![CDATA[<p>Rendering typically deals with integrating multidimensional functions that are usually solved through numerical integration, such as Monte Carlo or quasi-Monte Carlo. Multidimensional adaptive sampling [Hachisuka et al. 2008] is a technique that can significantly reduce the error by placing samples into locations of rapid changes. However, no efficient parallel version is available, limiting its practical utility in interactive and real-time applications. We reformulate the algorithm by exploiting the fact that different locations can be sampled in parallel to be suitable for modern GPU architectures. We start by placing a fixed number of initial samples in each cell of a uniform grid, where these initial samples are subsequently used for error estimation to adaptively refine the cells with higher error. We implemented our algorithm in CUDA and evaluated it in the context of hardware-accelerated ray tracing via OptiX within various scenarios, including distribution ray tracing effects such as motion blur, depth of field, direct lighting with an area light source, and indirect illumination. The results show that our method achieves error reduction by up to 83% within a given time budget that comprises only fractions of a second.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Kinematic Timing Curves: Cartoon Physics with Ease]]></title>
    <link>http://jcgt.org/published/0011/03/02</link>
    <guid>http://jcgt.org/published/0011/03/02</guid>
    <category>Paper</category>
    <pubDate>2022-07-20</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/02/icon.png">]]><![CDATA[<p>Arnie Cachelin, Kinematic Timing Curves: Cartoon Physics with Ease, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 4, 22-42, 2022</p>]]><![CDATA[<p>We analyze animators' &quot;cartoon physics&quot; and apply appropriate constraints and boundary conditions to Newton's second law, to derive a simple model with intuitive parameters that reproduces the effects of inertia, anticipation, and overshoot commonly found in both traditional animation and modern interactive graphics. Our model takes the form of a normalized timing curve that is computationally efficient and should be easy to add to systems for producing motion graphics, mechanical visualization, and user interface animations. In its simplest form, the model reproduces a standard smooth step curve, but with an alternate formulation derived from basic kinematics.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Transformation Constraints Using Approximate Spherical Regression]]></title>
    <link>http://jcgt.org/published/0011/03/01</link>
    <guid>http://jcgt.org/published/0011/03/01</guid>
    <category>Paper</category>
    <pubDate>2022-07-08</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/03/01/icon.png">]]><![CDATA[<p>Tomohiko Mukai, Transformation Constraints Using Approximate Spherical Regression, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 4, 1-21, 2022</p>]]><![CDATA[<p>Transformation constraint is a standard tool to control a transformation of a 3D object according to other transformations. For example, a character rig in real-time applications is often built using transformation constraints to control many skeletal joints with fewer handles simultaneously. However, conventional example-based methods suffer from artifacts such as flipping due to the complexity of 3D transformation, especially for rotation-rotation constraints. We propose an approximate regression scheme for data-driven transformation constraints. Our method uses a spherical basis function interpolation technique whose computations are linearly approximated in Lie algebra. Nonlinearities and ambiguities in 3D transformation spaces are handled with several extensions such as automated example duplication and regression weight constraints. Our approximate regression scheme provides smooth and predictable interlocking control among multiple transformations in real time.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast Marching-Cubes-Style Volume Evaluation for Level Set Surfaces]]></title>
    <link>http://jcgt.org/published/0011/02/02</link>
    <guid>http://jcgt.org/published/0011/02/02</guid>
    <category>Paper</category>
    <pubDate>2022-06-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/02/02/icon.png">]]><![CDATA[<p>Tetsuya Takahashi and Christopher Batty, Fast Marching-Cubes-Style Volume Evaluation for Level Set Surfaces, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 3, 30-45, 2022</p>]]><![CDATA[<p>We present an efficient and accurate volume evaluation method for grid-based level sets that computes the volume of the implicitly represented shape(s) in a manner consistent with Marching Cubes surface reconstruction. We utilize rotational symmetry to combine redundant Marching Cubes cases and avoid explicitly forming local triangulations using efficient volume computation formulae for pyramids and truncated prisms wherever possible, thereby achieving a fast and compact implementation. We demonstrate that our method is more efficient than previous approaches while generating results that converge with second-order accuracy and are perfectly consistent with volumes calculated directly from explicit Marching Cubes meshes. We provide a full C++17 reference implementation to foster adoption of the proposed method (<a href='https://github.com/tetsuya-takahashi/MC-style-vol-eval'>https://github.com/tetsuya-takahashi/MC-style-vol-eval</a>).</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Dataset and Explorer for 3D Signed Distance Functions]]></title>
    <link>http://jcgt.org/published/0011/02/01</link>
    <guid>http://jcgt.org/published/0011/02/01</guid>
    <category>Paper</category>
    <pubDate>2022-04-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/02/01/icon.png">]]><![CDATA[<p>Towaki Takikawa, Andrew Glassner,  and Morgan McGuire, A Dataset and Explorer for 3D Signed Distance Functions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 3, 1-29, 2022</p>]]><![CDATA[<p>Reference datasets are a key tool in the creation of new algorithms. They allow us to compare different existing solutions and identify problems and weaknesses during the development of new algorithms. The signed distance function (SDF) is enjoying a renewed focus of research activity in computer graphics, but until now there has been no standard reference dataset of such functions. We present a database of 63 curated, optimized, and regularized functions of varying complexity. Our functions are provided as analytic expressions that can be efficiently evaluated on a GPU at any point in space. We also present a viewing and inspection tool and software for producing SDF samples appropriate for both traditional graphics and training neural networks.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Improved Accuracy for Prism-Based Motion Blur]]></title>
    <link>http://jcgt.org/published/0011/01/05</link>
    <guid>http://jcgt.org/published/0011/01/05</guid>
    <category>Paper</category>
    <pubDate>2022-03-04</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/01/05/icon.png">]]><![CDATA[<p>Mads J. L. R&oslash;nnow, Ulf Assarsson, Erik Sintorn,  and Marco Fratarcangeli, Improved Accuracy for Prism-Based Motion Blur, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 82-95, 2022</p>]]><![CDATA[<p>For motion blur of dynamic triangulated objects, it is common to construct a prism-like shape for each triangle, from the linear trajectories of its three edges and the triangle’s start and end position during the delta time step. Such a prism can be intersected with a primary ray to find the time points where the triangle starts and stops covering the pixel center. These intersections are paired into time intervals for the triangle and pixel. Then, all time intervals, potentially from many prisms, are used to aggregate a motion-blurred color contribution to the pixel.</p><p>For real-time rendering purposes, it is common to <i>linearly</i> interpolate the ray-triangle intersection and <i>uv</i> coordinates over the time interval. This approximation often works well, but the true path in 3D and <i>uv</i> space for the ray-triangle intersection, as a function of time, is in general nonlinear.</p><p>In this article, we start by noting that the path of the intersection point can even partially reside outside of the prism volume itself: i.e., the prism volume is not always identical to the volume swept by the triangle. Hence, we must first show that the prisms still work as bounding volumes when finding the time intervals with primary rays, as that may be less obvious when the volumes differ. Second, we show a simple and potentially common class of cases where this happens, such as when a triangle undergoes a wobbling- or swinging-like motion during a time step. Third, when the volumes differ, linear interpolation between two points on the prism surfaces for triangle properties works particularly poorly, which leads to visual artifacts. 
Therefore, we finally modify a prism-based real-time motion-blur algorithm to use adaptive sampling along the correct paths regarding the triangle location and <i>uv</i> coordinates over which we want to compute a filtered color. Due to being adaptive, the algorithm has a negligible performance penalty on pixels where linear interpolation is sufficient, while being able to significantly improve the visual quality where needed, for a very small additional cost.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Optimizing Kronecker Sequences for Multidimensional Sampling]]></title>
    <link>http://jcgt.org/published/0011/01/04</link>
    <guid>http://jcgt.org/published/0011/01/04</guid>
    <category>Paper</category>
    <pubDate>2022-03-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/01/04/icon.png">]]><![CDATA[<p>Mayur Patel, Optimizing Kronecker Sequences for Multidimensional Sampling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 55-81, 2022</p>]]><![CDATA[<p>We review the use of Kronecker sequences for sampling applications. To apply them to multiple dimensions, we develop and execute a method to find irrational numbers that will produce good-quality results. Finally, we provide empirical evidence that the irrationals we found outperform those in current use and that they perform respectably against other sample generation techniques.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[SurfaceNets for Multi-Label Segmentations with Preservation of Sharp Boundaries]]></title>
    <link>http://jcgt.org/published/0011/01/03</link>
    <guid>http://jcgt.org/published/0011/01/03</guid>
    <category>Paper</category>
    <pubDate>2022-02-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/01/03/icon.png">]]><![CDATA[<p>Sarah F. Frisken, SurfaceNets for Multi-Label Segmentations with Preservation of Sharp Boundaries, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 34-54, 2022</p>]]><![CDATA[<p>We extend 3D SurfaceNets to generate surfaces of segmented 3D medical images composed of multiple materials represented as indexed labels. Our extension generates smooth, high-quality triangle meshes suitable for rendering and tetrahedralization, preserves topology and sharp boundaries between materials, guarantees a user-specified accuracy, and is fast enough that users can interactively explore the trade-off between accuracy and surface smoothness. We provide open-source code in the form of an extendable C++ library with a simple API, and a Qt and OpenGL-based application that allows users to import or randomly generate multi-label volumes to experiment with surface fairing parameters. In this paper, we describe the basic SurfaceNets algorithm, our extension to handle multiple materials, our method for preserving sharp boundaries between materials, and implementation details used to achieve efficient processing.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Tiling simplex noise and flow noise in two and three dimensions]]></title>
    <link>http://jcgt.org/published/0011/01/02</link>
    <guid>http://jcgt.org/published/0011/01/02</guid>
    <category>Paper</category>
    <pubDate>2022-02-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/01/02/icon.png">]]><![CDATA[<p>Stefan Gustavson and Ian McEwan, Tiling simplex noise and flow noise in two and three dimensions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 17-33, 2022</p>]]><![CDATA[<p>Simplex noise is a highly useful procedural primitive, but it is defined in weirdly oriented grids that do not tile easily. We set that straight, literally, by publishing implementations of 2D and 3D tiling simplex noise in GLSL. As an added bonus, our tiling noise functions support the animation technique called <i>flow noise</i>, which we extend further to 3D. At a small additional cost, these functions can compute their exact analytical gradient, and second-order derivatives as well if desired, making them suitable for gradient-dependent applications like bump mapping, analytical antialiasing and particle animations, e.g. with <i>curl noise</i>.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Real-time Shading with Free-form Planar Area Lights using Linearly Transformed Cosines]]></title>
    <link>http://jcgt.org/published/0011/01/01</link>
    <guid>http://jcgt.org/published/0011/01/01</guid>
    <category>Paper</category>
    <pubDate>2022-02-18</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0011/01/01/icon.png">]]><![CDATA[<p>Takahiro Kuge, Tatsuya Yatagawa,  and Shigeo Morishima, Real-time Shading with Free-form Planar Area Lights using Linearly Transformed Cosines, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 12, no. 2, 1-16, 2022</p>]]><![CDATA[<p>This article introduces a simple yet powerful approach to illuminating scenes with free-form planar area lights in real time. For this purpose, we extend a previous method for polygonal area lights in two ways. First, we adaptively approximate the closed boundary curve of the light, by extending the Ramer–Douglas–Peucker algorithm to consider the importance of a given subdivision step to the final shading result. Second, we efficiently clip the light to the upper hemisphere, by algebraically solving a polynomial equation per curve segment. Owing to these contributions, our method is efficient for various light shapes defined by cubic B&eacute;zier curves and achieves a significant performance improvement over the previous method applied to a uniformly discretized boundary curve.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast temporal reprojection without motion vectors]]></title>
    <link>http://jcgt.org/published/0010/03/02</link>
    <guid>http://jcgt.org/published/0010/03/02</guid>
    <category>Paper</category>
    <pubDate>2021-09-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/03/02/icon.png">]]><![CDATA[<p>Johannes Hanika, Lorenzo Tessari,  and Carsten Dachsbacher, Fast temporal reprojection without motion vectors, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 3, 19-45, 2021</p>]]><![CDATA[<p>Rendering realistic graphics often depends on random sampling, increasingly so even for real-time settings. When rendering animations, there is often a surprising amount of information that can be reused between frames.</p><p>This is exploited in numerous rendering algorithms, offline and real-time, by relying on reprojecting samples, for denoising as a post-process or for more time-critical applications such as temporal antialiasing for interactive preview or real-time rendering. Motion vectors are widely used during reprojection to align adjacent frames’ warping based on the input geometry vectors between two time samples. Unfortunately, this is not always possible, as not every pixel may have coherent motion, such as when a glass surface moves: the highlight moves in a different direction than the surface or the object behind the surface. Estimation of true motion vectors is thus only possible for special cases. We devise a fast algorithm to compute dense correspondences in image space to generalize reprojection-based algorithms to scenarios where analytical motion vectors are unavailable and high performance is required. Our key ingredient is an efficient embedding of patch-based correspondence detection into a hierarchical algorithm. We demonstrate the effectiveness and utility of the proposed reprojection technique for three applications: temporal antialiasing, handheld burst photography, and Monte Carlo rendering of animations.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[An OpenEXR Layout for Spectral Images]]></title>
    <link>http://jcgt.org/published/0010/03/01</link>
    <guid>http://jcgt.org/published/0010/03/01</guid>
    <category>Paper</category>
    <pubDate>2021-09-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/03/01/icon.png">]]><![CDATA[<p>Alban Fichet, Romain Pacanowski,  and Alexander Wilkie, An OpenEXR Layout for Spectral Images, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 3, 1-18, 2021</p>]]><![CDATA[<p>We propose a standardized layout to organize spectral data stored in OpenEXR images. We motivate why we chose the OpenEXR format as the basis for our work, and we explain our choices with regard to data selection and organization: our goal is to define a standard for the exchange of measured or simulated spectral and bi-spectral data. We also provide sample code to store spectral images in OpenEXR format.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[MMPX Style-Preserving Pixel Art Magnification]]></title>
    <link>http://jcgt.org/published/0010/02/04</link>
    <guid>http://jcgt.org/published/0010/02/04</guid>
    <category>Paper</category>
    <pubDate>2021-06-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/02/04/icon.png">]]><![CDATA[<p>Morgan McGuire and Mara Gagiu, MMPX Style-Preserving Pixel Art Magnification, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 2, 83-117, 2021</p>]]><![CDATA[<p>We present MMPX, an efficient filter for magnifying pixel art, such as 8- and 16-bit era video-game sprites, fonts, and screen images, by a factor of two in each dimension. MMPX preserves art style, attempting to predict what the artist would have produced if working at a larger scale but within the same technical constraints.</p><p>Pixel-art magnification enables the displaying of classic games and new retro-styled ones on modern screens at runtime, provides high-quality scaling and rotation of sprites and raster-font glyphs through precomputation at load time, and accelerates content-creation workflow.</p><p>MMPX reconstructs curves, diagonal lines, and sharp corners while preserving the exact palette, transparency, and single-pixel features. For general pixel art, it can often preserve more aspects of the original art style than previous magnification filters such as nearest-neighbor, bilinear, HQX, XBR, and EPX. In specific cases and applications, other filters will be better. We recommend EPX and base XBR for content with exclusively rounded corners, and HQX and antialiased XBR for content with large palettes, gradients, and antialiasing. MMPX is fast enough on embedded systems to process typical retro 64k-pixel full screens in less than 0.5 ms on a GPU or CPU. We include open source implementations in C++, JavaScript, and OpenGL ES GLSL for our method and several others.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Polarizing Filter Function for Real-Time Rendering]]></title>
    <link>http://jcgt.org/published/0010/02/03</link>
    <guid>http://jcgt.org/published/0010/02/03</guid>
    <category>Paper</category>
    <pubDate>2021-06-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/02/03/icon.png">]]><![CDATA[<p>Viktor Enfeldt and Prashant Goswami, A Polarizing Filter Function for Real-Time Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 2, 59-82, 2021</p>]]><![CDATA[<p>We present a function that can be used in conventional non-polarizing renderers to simulate the reflection-altering visual effects of real polarizing filters, without having to replace the existing light and surface representations. The relevant Stokes-Mueller polarization calculations are simplified so that neither Stokes vectors nor Mueller matrices are needed in the finished implementation. Our function approximates the surface’s complex refractive index with its specular color, and the accuracy of this approximation is demonstrated with some common conductor materials; no approximation needs to be made for dielectric materials. As our function only affects specularly reflected light, it cannot simulate all the visual effects produced by real polarizing filters, only the reflection-reducing ones. We show the visual effects of our filter function and measure its execution time in a real-time rendered application. The function’s correctness is verified by comparing it with a filter implemented in a polarizing offline renderer. HLSL source code is provided for the real-time implementation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Stable Geometric Specular Antialiasing with Projected-Space NDF Filtering]]></title>
    <link>http://jcgt.org/published/0010/02/02</link>
    <guid>http://jcgt.org/published/0010/02/02</guid>
    <category>Paper</category>
    <pubDate>2021-05-20</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/02/02/icon.png">]]><![CDATA[<p>Yusuke Tokuyoshi and Anton S. Kaplanyan, Stable Geometric Specular Antialiasing with Projected-Space NDF Filtering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 2, 31-58, 2021</p>]]><![CDATA[<p>Shading filtering proposed by Kaplanyan et al. is a simple solution for specular aliasing. It filters a distribution of microfacet normals in the domain of microfacet slopes by estimating the filtering kernel using derivatives of a halfway vector between incident and outgoing directions. However, for real-time rendering, this approach can produce noticeable artifacts because of an estimation error of derivatives. For forward rendering, this estimation error is increased significantly at grazing angles and near edges. The present work improves the quality of the original technique, while decreasing the complexity of the code at the same time. To reduce the error, we introduce a filtering method in the domain of orthographically projected microfacet normals and a practical approximation of this filtering method. In addition, we optimize the calculation of an isotropic filter kernel used for deferred rendering by applying the proposed projected-space filtering. As our implementation is simpler than the original method, it is easier to integrate in time-sensitive applications, such as game engines, while at the same time improving the filtering quality.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Scaling Probe-Based Real-Time Dynamic Global Illumination for Production]]></title>
    <link>http://jcgt.org/published/0010/02/01</link>
    <guid>http://jcgt.org/published/0010/02/01</guid>
    <category>Paper</category>
    <pubDate>2021-05-03</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/02/01/icon.png">]]><![CDATA[<p>Zander Majercik, Adam Marrs, Josef Spjut,  and Morgan McGuire, Scaling Probe-Based Real-Time Dynamic Global Illumination for Production, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 2, 1-29, 2021</p>]]><![CDATA[<p>We contribute several practical extensions to the probe-based irradiance-field-with-visibility representation [Majercik et al. 2019] [McGuire et al. 2017] to improve image quality, constant and asymptotic performance, memory efficiency, and artist control. We developed these extensions in the process of incorporating the previous work into the global illumination solutions of the NVIDIA RTXGI SDK, the Unity and Unreal Engine 4 game engines, and proprietary engines for several commercial games. These extensions include: an intuitive tuning parameter (the &quot;self-shadow&quot; bias); heuristics to speed transitions in the global illumination; reuse of irradiance data as prefiltered radiance for recursive glossy reflection; a probe state machine to prune work that will not affect the final image; and multiresolution cascaded volumes for large worlds.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[New JCGT paper submission form]]></title>
    <link>http://jcgt.org/news/2021-04-29-SubmissionForm.html</link>
    <guid>http://jcgt.org/news/2021-04-29-SubmissionForm.html</guid>
    <category>News</category>
    <pubDate>2021-04-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>For ten years, the JCGT submission process has consisted of sending your paper to the editor in chief in an email. In this time, we've had a handful of instances where a submission was lost, occasionally to overactive spam filtering, and more often due to email size limits (which, distressingly, will quietly drop the email in question without notifying sender or receiver). We're pleased to announce a new online system for submission. New paper submissions should complete the new <a href="http://jcgt.org/submit.html">submission form</a> with the contact author's email, paper title, and abstract. After completing the form, you will be contacted by the Editor in Chief with a link to upload your paper and any supplemental material.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT papers to be presented at I3D]]></title>
    <link>http://jcgt.org/news/2021-04-02-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2021-04-02-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2021-04-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>As part of the continued relationship between JCGT and the <a href="http://i3dsymposium.github.io/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>, papers published in JCGT have the option to be presented by the authors at the annual I3D conference. This year, the following JCGT papers will have presentations during I3D (April 20–22, 2021):</p>
        <ul>
          <li>Tue 20 Apr: Mark Jarzynski and Marc Olano, <a href="http://www.jcgt.org/published/0009/03/02/">Hash Functions for GPU Rendering</a></li>
          <li>Tue 20 Apr: Alisa Jung, Johannes Hanika, and Carsten Dachsbacher, <a href="http://www.jcgt.org/published/0009/02/01/">Detecting Bias in Monte Carlo Renderers using Welch&rsquo;s t-test</a></li>
          <li>Thu 22 Apr: Tsukasa Fukusato, Seung-Tak Noh, Takeo Igarashi, and Daichi Ito, <a href="http://jcgt.org/published/0009/03/03/">Interactive Meshing of User-Defined Point Sets</a></li>
          <li>Thu 22 Apr: Sylvain Rousseau and Tamy Boubekeur, <a href="http://www.jcgt.org/published/0009/04/02/">Unorganized Unit Vectors Sets Quantization</a></li>
          <li>Thu 22 Apr: Joel Castellon, Moritz Bächer, Matt McCrory, Alfredo Ayala, Jeremy Stolarz, and Kenny Mitchell, <a href="http://jcgt.org/published/0009/03/01/">Active Learning for Interactive Audio-Animatronic Performance Design</a></li>
        </ul>

        <p>This year's I3D conference is virtual, and completely free. Talks will be streamed on the <a href="https://www.youtube.com/c/I3DSymposium">I3D YouTube channel</a>. There's plenty of other content besides the JCGT paper presentations! Be sure to <a href="http://i3dsymposium.github.io/2021/registration.html">register</a> to be able to access the speaker question and discussion forums, posters program, and presentation slides.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Ray Traversal of OpenVDB Frustum Grids]]></title>
    <link>http://jcgt.org/published/0010/01/03</link>
    <guid>http://jcgt.org/published/0010/01/03</guid>
    <category>Paper</category>
    <pubDate>2021-03-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/01/03/icon.png">]]><![CDATA[<p>Manuel N. Gamito, Ray Traversal of OpenVDB Frustum Grids, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 11, no. 1, 49-63, 2021</p>]]><![CDATA[<p>The rendering of volumes with modern path-tracing techniques requires the sampling of collision distances within the media to determine where scattering events occur. For collision sampling, delta-tracking algorithms traverse a ray through clusters of voxels, while querying the minimum and maximum extinction values within each cluster. Such ray traversals can be performed easily on uniform grids but face difficulties due to the non-linearity of frustum grids. Frustum grids, however, can be very useful to distribute voxels efficiently, relative to a perspective camera, while keeping memory requirements low. We present an incremental technique for traversing frustum grids in OpenVDB&mdash;a widely adopted format for volume storage in production path tracing. Our technique is a generalization of the digital differential analyser algorithm that is the standard for ray traversal in uniform grids.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast Radius Search Exploiting Ray Tracing Frameworks]]></title>
    <link>http://jcgt.org/published/0010/01/02</link>
    <guid>http://jcgt.org/published/0010/01/02</guid>
    <category>Paper</category>
    <pubDate>2021-02-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/01/02/icon.png">]]><![CDATA[<p>I. Evangelou, G. Papaioannou, K. Vardis, and A. A. Vasilakis, Fast Radius Search Exploiting Ray Tracing Frameworks, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 10, no. 1, 25-48, 2021</p>]]><![CDATA[<p>Spatial queries to infer information from the neighborhood of a set of points are very frequently performed in rendering and geometry processing algorithms. Traditionally, these are accomplished using radius and k-nearest neighbors search operations, which utilize kd-trees and other specialized spatial data structures that fall short of delivering high performance. Recently, advances in ray tracing performance, with respect to both acceleration data structure construction and ray traversal times, have resulted in a wide adoption of the ray tracing paradigm for graphics-related tasks that spread beyond typical image synthesis. In this work, we propose an alternative formulation of the radius search operation that maps the problem to the ray tracing paradigm, in order to take advantage of the available GPU-accelerated solutions for it. We demonstrate the performance gain relative to traditional spatial search methods, especially on dynamically updated sample sets, using two representative applications: geometry processing point-wise operations on scanned point clouds and global illumination via progressive photon mapping.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Improved Shader and Texture Level of Detail Using Ray Cones]]></title>
    <link>http://jcgt.org/published/0010/01/01</link>
    <guid>http://jcgt.org/published/0010/01/01</guid>
    <category>Paper</category>
    <pubDate>2021-01-25</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0010/01/01/icon.png">]]><![CDATA[<p>Tomas Akenine-M&ouml;ller, Cyril Crassin, Jakub Boksansky, Laurent Belcour, Alexey Panteleev, and Oli Wright, Improved Shader and Texture Level of Detail Using Ray Cones, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 10, no. 1, 1-24, 2021</p>]]><![CDATA[<p>In real-time ray tracing, texture filtering is an important technique to increase image quality. Current games, such as <i>Minecraft with RTX</i> on Windows 10, use ray cones to determine texture-filtering footprints. In this paper, we present several improvements to the ray-cones algorithm that improve image quality and performance and make it easier to adopt in game engines. We show that the total time per frame can decrease by around 10% in a GPU-based path tracer, and we provide a public-domain implementation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Unorganized Unit Vectors Sets Quantization]]></title>
    <link>http://jcgt.org/published/0009/04/02</link>
    <guid>http://jcgt.org/published/0009/04/02</guid>
    <category>Paper</category>
    <pubDate>2020-12-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/04/02/icon.png">]]><![CDATA[<p>Sylvain Rousseau and Tamy Boubekeur, Unorganized Unit Vectors Sets Quantization, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 4, 21-37, 2020</p>]]><![CDATA[<p>We present a new on-the-fly compression scheme for unorganized unit vectors sets which, using a hierarchical strategy, provides a significant gain in precision compared to classical independent unit vectors quantization methods. Given a unit vectors set lying in a subset of the unit sphere, our key idea consists of mapping it to the surface of the whole unit sphere, for which collaborative compression achieves a high signal-to-noise ratio. During the compression process, the unit vectors are grouped in a way that makes them more coherent, a property which is often used in ray tracing scenarios. The achieved compression ratio is superior to entropy encoding methods while being easy to implement. Moreover, the constant complexity of the mapping w.r.t. the number of unit vectors makes our method fast. For ease of use and replication, we provide pseudo-code along with the C++ source code. Our scheme is instrumental for applications requiring on-the-fly compression of unorganized unit vectors sets, such as the directions of a batch or wavefront of rays for rendering engines working in a distributed fashion, or the local surface orientations in an acquired 3D point cloud. The present manuscript is an extended version of a method previously published as a Technical Brief at the ACM SIGGRAPH Asia 2017 Conference.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Practical Hash-based Owen Scrambling]]></title>
    <link>http://jcgt.org/published/0009/04/01</link>
    <guid>http://jcgt.org/published/0009/04/01</guid>
    <category>Paper</category>
    <pubDate>2020-12-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/04/01/icon.png">]]><![CDATA[<p>Brent Burley, Practical Hash-based Owen Scrambling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 4, 1-20, 2020</p>]]><![CDATA[<p>Owen's nested uniform scrambling maximally randomizes low-discrepancy sequences while preserving multidimensional stratification. This enables advantageous convergence for favorable integrands and bounded error for unfavorable ones, and makes it less prone to structured artifacts than other scrambling methods. The Owen-scrambled Sobol sequence in particular has been gaining popularity recently in computer graphics. However, implementations typically use a precomputed table of samples which imposes limits on sequence length and dimension.</p><p>In this paper, we adapt the Laine-Karras hash function to achieve an implementation of Owen scrambling for the Sobol sequence that is simple and efficient enough to perform on-the-fly evaluation for sequences of indeterminate length and dimension and in arbitrary sample order, readily permitting parallel, progressive, or adaptive integration. We combine this with nested uniform shuffling to enable decorrelated reuse of the sequence for padding to higher dimensions. We discuss practical-use considerations, and we outline how hash-based Owen scrambling can be extended to arbitrary base for use with non-base-two sequences.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Surface Gradient–Based Bump Mapping Framework]]></title>
    <link>http://jcgt.org/published/0009/03/04</link>
    <guid>http://jcgt.org/published/0009/03/04</guid>
    <category>Paper</category>
    <pubDate>2020-10-21</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/03/04/icon.png">]]><![CDATA[<p>Morten S. Mikkelsen, Surface Gradient–Based Bump Mapping Framework, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 3, 60-90, 2020</p>]]><![CDATA[<p>In this paper, a new framework is proposed for the layering and compositing of bump maps, including support for multiple sets of texture coordinates as well as procedurally generated texture coordinates and geometry. Furthermore, we provide proper support for bump maps defined on a volume, such as decal projectors, triplanar projection, and noise-based functions.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Interactive Meshing of User-Defined Point Sets]]></title>
    <link>http://jcgt.org/published/0009/03/03</link>
    <guid>http://jcgt.org/published/0009/03/03</guid>
    <category>Paper</category>
    <pubDate>2020-10-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/03/03/icon.png">]]><![CDATA[<p>Tsukasa Fukusato, Seung-Tak Noh, Takeo Igarashi, and Daichi Ito, Interactive Meshing of User-Defined Point Sets, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 3, 39-58, 2020</p>]]><![CDATA[<p>This paper introduces an interactive framework to design low-poly 3D models from 2D model sheets, which show how the model (e.g., sketched characters) looks from the front and side. First, we made a prototype tool for 2D artists, without 3D modeling skills, to generate 3D point sets from 2D model sheets. This tool is simple but still useful for artists to manually scan their 2D designs one by one. The generated 3D point sets (hereafter, user-defined point sets) are, in a sense, similar to point clouds produced by 3D scanners, but it is much more difficult to apply fully automatic methods to generate the expected mesh from them due to their low density. Therefore, we also implement a novel meshing tool as an alpha-shape mechanism [Edelsbrunner and M&uuml;cke 1994], plus a painting metaphor where the user assigns spatially varying alpha-shapes to the point set while observing the intermediate results. We conducted a user study and confirmed that all participants could create 3D models within approximately 20 minutes. The proposed system could be a few times faster than a conventional method in the animation and game industry.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Hash Functions for GPU Rendering]]></title>
    <link>http://jcgt.org/published/0009/03/02</link>
    <guid>http://jcgt.org/published/0009/03/02</guid>
    <category>Paper</category>
    <pubDate>2020-10-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/03/02/icon.png">]]><![CDATA[<p>Mark Jarzynski and Marc Olano, Hash Functions for GPU Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 3, 21-38, 2020</p>]]><![CDATA[<p>In many graphics applications, a deterministic random hash provides the best source of random numbers. We evaluate a range of existing hash functions for random number quality using the TestU01 test suite, and GPU execution speed through benchmarking. We analyze the hash functions on the Pareto frontier to make recommendations on which hash functions offer the best quality/speed trade-off for the range of needs, from high-performance/low-quality to high-quality/low-performance. We also present a new class of hash functions tuned for multidimensional input and output that performs well at the high-quality end of this spectrum. We provide a supplemental document with test results and code for all hashes.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Active Learning for Interactive Audio-Animatronic Performance Design]]></title>
    <link>http://jcgt.org/published/0009/03/01</link>
    <guid>http://jcgt.org/published/0009/03/01</guid>
    <category>Paper</category>
    <pubDate>2020-10-11</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/03/01/icon.png">]]><![CDATA[<p>Joel Castellon, Moritz B&auml;cher, Matt McCrory, Alfredo Ayala, Jeremy Stolarz, and Kenny Mitchell, Active Learning for Interactive Audio-Animatronic Performance Design, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 3, 1-19, 2020</p>]]><![CDATA[<p>We present a practical neural computational approach for interactive design of Audio-Animatronic&reg; facial performances. An offline quasi-static reference simulation, driven by a coupled mechanical assembly, accurately predicts hyperelastic skin deformations. To achieve interactive digital pose design, we train a shallow, fully connected neural network (KSNN) on input motor activations to solve for the simulated mesh vertex positions. Our fully automatic synthetic training algorithm enables a first-of-its-kind active learning framework (GEN-LAL) for generative modeling of facial pose simulations. With adaptive selection, we significantly reduce training time to less than half that of the unmodified training approach for each new Audio-Animatronic&reg; figure.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Rendering Layered Materials with Anisotropic Interfaces]]></title>
    <link>http://jcgt.org/published/0009/02/03</link>
    <guid>http://jcgt.org/published/0009/02/03</guid>
    <category>Paper</category>
    <pubDate>2020-06-20</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/02/03/icon.png">]]><![CDATA[<p>Philippe Weier and Laurent Belcour, Rendering Layered Materials with Anisotropic Interfaces, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 2, 37-57, 2020</p>]]><![CDATA[<p>We present a lightweight and efficient method to render layered materials with anisotropic interfaces. Our work extends the statistical framework of Belcour [2018] to handle anisotropic microfacet models. A key insight of our work is that when projected on the tangent plane, BRDF lobes from an anisotropic GGX distribution are well approximated by ellipsoidal distributions aligned with the tangent frame: its covariance matrix is diagonal in this space. We leverage this property and perform the adding-doubling algorithm on each anisotropy axis independently. We further update the mapping of roughness to directional variance and the evaluation of the average reflectance to account for anisotropy. We extensively tested this model against ground truth.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Massive Fractal in Days, Not Years]]></title>
    <link>http://jcgt.org/published/0009/02/02</link>
    <guid>http://jcgt.org/published/0009/02/02</guid>
    <category>Paper</category>
    <pubDate>2020-06-16</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/02/02/icon.png">]]><![CDATA[<p>Theodore Kim and Tom Duff, A Massive Fractal in Days, Not Years, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 2, 26-36, 2020</p>]]><![CDATA[<p>We present a new, numerically stable algorithm that allows us to compute a previously-infeasible, fractalized Stanford Bunny composed of 10 billion triangles. Recent work [Kim 2015] showed that it is feasible to compute quaternion Julia sets that conform to any arbitrary shape. However, the scalability of the technique was limited because it used high-order rationals requiring 80 bits of precision. We address the sources of numerical difficulty and allow the same computation to be performed using 64 bits. Crucially, this enables computation on the GPU, and computing a 10-billion-triangle model now takes 17 days instead of 10 years. We show that the resulting mesh is useful as a test case for a distributed renderer.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Detecting Bias in Monte Carlo Renderers using Welch’s t-test]]></title>
    <link>http://jcgt.org/published/0009/02/01</link>
    <guid>http://jcgt.org/published/0009/02/01</guid>
    <category>Paper</category>
    <pubDate>2020-06-13</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/02/01/icon.png">]]><![CDATA[<p>Alisa Jung, Johannes Hanika, and Carsten Dachsbacher, Detecting Bias in Monte Carlo Renderers using Welch’s t-test, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 2, 1-25, 2020</p>]]><![CDATA[<p>When checking the implementation of a new renderer, one usually compares the output to that of a reference implementation. However, such tests require a large number of samples to be reliable, and sometimes they are unable to reveal very subtle differences that are caused by bias, but overshadowed by random noise. We propose using Welch’s t-test, a statistical test that reliably finds small bias even at low sample counts. Welch’s t-test is an established method in statistics to determine if two sample sets have the same underlying mean, based on sample statistics. We adapt it to test whether two renderers converge to the same image, i.e., the same mean per pixel or pixel region. We also present two strategies for visualizing and analyzing the test’s results, assisting us in localizing especially problematic image regions and detecting biased implementations with high confidence at low sample counts both for the reference and tested implementation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Real-time Room Reverb in Large Scenes using Transport Path Precomputation]]></title>
    <link>http://jcgt.org/published/0009/01/03</link>
    <guid>http://jcgt.org/published/0009/01/03</guid>
    <category>Paper</category>
    <pubDate>2020-03-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/01/03/icon.png">]]><![CDATA[<p>Gregor M&uuml;ckl and Carsten Dachsbacher, Real-time Room Reverb in Large Scenes using Transport Path Precomputation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 1, 33-53, 2020</p>]]><![CDATA[<p>Believable immersive virtual environments require accurate sound in addition to plausible graphics to be convincing to the user. We present a practical and fast method to compute sound reflections for moving sources and listeners in virtual scenes in real time, even on low-end hardware. In a preprocessing step we compute transport paths starting from possible source positions and store virtual diffuse sources along the path. At runtime we connect these subpaths to the actual source position and the virtual diffuse sources to the microphone to compute the impulse response. Our method can be implemented as a single-pass GPU rendering process and requires only a few milliseconds for each update.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Addendum to JCGT v4 n1]]></title>
    <link>http://jcgt.org/news/2020-02-14-addendum.html</link>
    <guid>http://jcgt.org/news/2020-02-14-addendum.html</guid>
    <category>News</category>
    <pubDate>2020-02-14</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        The author of <a href="http://www.jcgt.org/published/0004/01/03/">http://www.jcgt.org/published/0004/01/03/</a> recently became aware of relevant prior work that should have been cited in the original publication. We are maintaining the paper as originally published, but have added a supplemental document noting the missing reference.
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Progressive Least-Squares Encoding for Linear Bases]]></title>
    <link>http://jcgt.org/published/0009/01/02</link>
    <guid>http://jcgt.org/published/0009/01/02</guid>
    <category>Paper</category>
    <pubDate>2020-01-25</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/01/02/icon.png">]]><![CDATA[<p>Thomas Roughton, Progressive Least-Squares Encoding for Linear Bases, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 1, 17-32, 2020</p>]]><![CDATA[<p>Linear basis functions can be used to encode spherical functions in a compressed format, wherein information such as a radiance field may be represented by a fixed set of basis functions and corresponding basis coefficients. In computer graphics, the function to encode is often generated by way of Monte-Carlo integration, and in contexts such as lightmap or irradiance volume baking it is useful to display a progressive result.</p><p>This paper presents an efficient, easily-implemented, GPU-compatible method for progressively performing approximate least-squares encoding into arbitrary linear bases. The method additionally supports approximate non-negative encoding, ensuring that the reconstructed function is positive-valued and improving appearance in a range of scenarios.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Performance Evaluation of Acceleration Structures for Cone Tracing Traversal]]></title>
    <link>http://jcgt.org/published/0009/01/01</link>
    <guid>http://jcgt.org/published/0009/01/01</guid>
    <category>Paper</category>
    <pubDate>2020-01-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0009/01/01/icon.png">]]><![CDATA[<p>Roman Wiche and David Kuri, Performance Evaluation of Acceleration Structures for Cone Tracing Traversal, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 9, no. 1, 1-16, 2020</p>]]><![CDATA[<p>This paper focuses on the technical question of how to apply acceleration structures used for polygonal scenes from ray tracing to cone tracing. We examine cone-traversal performance for k-d trees and bounding volume hierarchies. Our results demonstrate which accelerator to prefer for cone tracing given corresponding apertures and provide an estimate of when cones of varying sizes could replace a specified number of ray samples with the same traversal performance but without subsampling.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[On Histogram-Preserving Blending for Randomized Texture Tiling]]></title>
    <link>http://jcgt.org/published/0008/04/02</link>
    <guid>http://jcgt.org/published/0008/04/02</guid>
    <category>Paper</category>
    <pubDate>2019-11-08</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/04/02/icon.png">]]><![CDATA[<p>Brent Burley, On Histogram-Preserving Blending for Randomized Texture Tiling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 4, 31-53, 2019</p>]]><![CDATA[<p>To support interactive authoring of high-resolution randomly tiled textures, we modify the histogram-preserving tiling algorithm of Heitz and Neyret to avoid any lengthy preprocessing. Instead of calculating a 3D histogram transformation by optimal transport, which can take minutes even at low resolution, the input texture is transformed using per-channel 1D lookup tables, constructed trivially from the input histogram on texture load. Three sources of clipping are described. Modifying the algorithm to use a truncated Gaussian distribution and a novel soft-clipping contrast operator avoids clipping artifacts while retaining very high rendering performance. Per-channel histogram preservation is sufficient for most textures, but some will produce unwanted colorations; these can often be avoided by performing histogram preservation only on luminance. Exponentiating the blending weights can reduce ghosting artifacts and better preserve structured texture details.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Image-based Remapping of Spatially-varying Material Appearance]]></title>
    <link>http://jcgt.org/published/0008/04/01</link>
    <guid>http://jcgt.org/published/0008/04/01</guid>
    <category>Paper</category>
    <pubDate>2019-10-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/04/01/icon.png">]]><![CDATA[<p>Alejandro Sztrajman, Jaroslav K&rcaron;iv&aacute;nek, Alexander Wilkie, and Tim Weyrich, Image-based Remapping of Spatially-varying Material Appearance, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 4, 1-30, 2019</p>]]><![CDATA[<p>BRDF models are ubiquitous tools for the representation of material appearance. However, an astonishingly large number of different models are now in practical use. Both a lack of BRDF model standardization across implementations found in different renderers, as well as the often semantically different capabilities of various models, have become a major hindrance to the interchange of production assets between different rendering systems. Current attempts to solve this problem rely on manually finding visual similarities between models, or mathematical similarities between their functional shapes, which requires access to the shader implementation, usually unavailable in commercial renderers. We present a method for automatic translation of material appearance between different BRDF models that uses an image-based metric for appearance comparison and that delegates the interaction with the model to the renderer. We analyze the performance of the method, both with respect to robustness and also visual differences of the fits for multiple combinations of BRDF models. While it is effective for individual BRDFs, the computational cost does not scale well for spatially varying BRDFs. Therefore, we also present two regression schemes that approximate the shape of the transformation function and generate a reduced representation that evaluates instantly and without further interaction with the renderer. We present respective visual comparisons of the remapped SVBRDF models for commonly used renderers and shading models and show that our approach is able to extrapolate transformed BRDF parameters better than other complex regression schemes. Finally, we analyze the transformation between specular and metallic workflows, comparing our results with two analytic conversions.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Fast and Memory Saving Marching Cubes 33 Implementation with the Correct Interior Test]]></title>
    <link>http://jcgt.org/published/0008/03/01</link>
    <guid>http://jcgt.org/published/0008/03/01</guid>
    <category>Paper</category>
    <pubDate>2019-08-08</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/03/01/icon.png">]]><![CDATA[<p>David Vega, Javier Abache, and David Coll, A Fast and Memory Saving Marching Cubes 33 Implementation with the Correct Interior Test, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 3, 1-18, 2019</p>]]><![CDATA[<p>The Marching Cubes (MC) algorithm is used to generate isosurfaces from a regular 3D grid, and it is widely used in visualization of volume data from CT scans, MRI, X-ray diffraction, quantum mechanical calculations, and mathematical functions. In this report we introduce a C-language implementation of the Marching Cubes 33 (MC33) algorithm. The routines of this implementation were optimized for fast execution and low memory consumption. The arrays of triangles, point coordinates, and normal vectors of the generated surfaces do not require extensive contiguous memory, which improves memory allocation. The interior test was properly performed considering that one of the four main diagonals of the grid cube must first be selected.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields]]></title>
    <link>http://jcgt.org/published/0008/02/01</link>
    <guid>http://jcgt.org/published/0008/02/01</guid>
    <category>Paper</category>
    <pubDate>2019-06-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/02/01/icon.png">]]><![CDATA[<p>Zander Majercik, Jean-Philippe Guertin, Derek Nowrouzezahrai, and Morgan McGuire, Dynamic Diffuse Global Illumination with Ray-Traced Irradiance Fields, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 2, 1-30, 2019</p>]]><![CDATA[<p>We show how to compute global illumination efficiently in scenes with dynamic objects and lighting. We extend classic irradiance probes to a compact encoding of the full irradiance field in a scene. First, we compute the dynamic irradiance field using an efficient GPU memory layout, geometric ray tracing, and appropriate sampling rates without down-sampling or filtering prohibitively large spherical textures. Second, we devise a robust filtered irradiance query, using a novel visibility-aware moment-based interpolant. We experimentally validate performance and accuracy tradeoffs and show that our method of dynamic diffuse global illumination (DDGI) robustly lights scenes of varying geometric and radiometric complexity. For completeness, we demonstrate results with a state-of-the-art glossy ray tracing term for sampling the full dynamic light field and include reference GLSL code.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Generation of Points that Satisfy Two-Dimensional Elementary Intervals]]></title>
    <link>http://jcgt.org/published/0008/01/04</link>
    <guid>http://jcgt.org/published/0008/01/04</guid>
    <category>Paper</category>
    <pubDate>2019-02-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/01/04/icon.png">]]><![CDATA[<p>Matt Pharr, Efficient Generation of Points that Satisfy Two-Dimensional Elementary Intervals, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 1, 56-68, 2019</p>]]><![CDATA[<p>Precomputing high-quality sample points has been shown to be a useful technique for Monte Carlo integration in rendering; doing so allows optimizing properties of the points without the performance constraints of generating samples during rendering. A particularly useful property to incorporate is stratification across elementary intervals, which has been shown to reduce error in Monte Carlo integration. This is a key property of the recently-introduced progressive multi-jittered, pmj02 and pmj02bn points [Christensen et al. 2018].</p><p>For generating such sets of sample points, it is important to be able to efficiently choose new samples that are not in elementary intervals occupied by existing samples. Random search, while easy to implement, quickly becomes infeasible after a few thousand points. We describe an algorithm that efficiently generates 2D sample points that are stratified with respect to sets of elementary intervals. If a total of n sample points are being generated, then for each sample, our algorithm uses O(n<sup>1/2</sup>) time to build a data structure that represents the regions where a next sample may be placed. Given this data structure, valid samples can be generated in O(1) time. We demonstrate the utility of our method by generating much larger sets of pmj02bn points than were feasible previously.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Multiple-Scattering Microfacet Model for Real-Time Image Based Lighting]]></title>
    <link>http://jcgt.org/published/0008/01/03</link>
    <guid>http://jcgt.org/published/0008/01/03</guid>
    <category>Paper</category>
    <pubDate>2019-01-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/01/03/icon.png">]]><![CDATA[<p>Carmelo J. Fdez-Ag&uuml;era, A Multiple-Scattering Microfacet Model for Real-Time Image Based Lighting, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 1, 45-55, 2019</p>]]><![CDATA[<p>This article introduces an extension to real-time image-based lighting models that incorporates multiple scattering; it is suitable for real-time applications due to its low run-time cost. The main insight is that the precomputed integrals used for computing the single scattering term already contain all the information needed to simulate the remaining light bounces. The result is a technique that adds little overhead compared to its single scattering counterpart, but accomplishes perfect energy conservation and preservation. Even though the derivation is presented for a GGX bidirectional reflectance distribution function (BRDF), it can be trivially applied to other models as long as the split-sum approximation can be used to precompute the BRDF integral, or to find an analytical fit to it.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Implementation of Fast and Adaptive Procedural Cellular Noise]]></title>
    <link>http://jcgt.org/published/0008/01/02</link>
    <guid>http://jcgt.org/published/0008/01/02</guid>
    <category>Paper</category>
    <pubDate>2019-01-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/01/02/icon.png">]]><![CDATA[<p>Th&eacute;o Jonchier, Alexandre Derouet-Jourdan, and Marc Salvati, Implementation of Fast and Adaptive Procedural Cellular Noise, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 1, 35-44, 2019</p>]]><![CDATA[<p>Cellular noise as defined by Worley is a useful tool to render natural phenomena, such as skin cells, reptile scales, or rock minerals. It is computed at each position in texture space by finding the closest feature points in a grid. In a preliminary work, we showed in 2D space how to obtain a non-uniform distribution of points by subdividing cells in the grid and that there is an optimal traversal order of the grid cells. In this paper, we generalize these results to higher dimensions, and we give details about their implementation. Our optimal traversal in 3D proves to be 15% to 20% faster than the standard Worley algorithm.</p>]]></description>
  </item>
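As background for the abstract above: classic single-point-per-cell Worley noise in 2D can be sketched as follows. This is a minimal illustrative version with a toy hash of my own choosing; the paper's contributions (non-uniform point distributions via cell subdivision and an optimized traversal order) are not shown here.

```python
import math

def _feature_point(ix, iy):
    """Deterministic pseudo-random feature point inside grid cell (ix, iy).
    A toy integer hash, for illustration only."""
    h = ((ix * 73856093) ^ (iy * 19349663)) & 0xFFFFFFFF
    fx = ((h * 2654435761) & 0xFFFFFFFF) / 0xFFFFFFFF
    fy = ((h * 2246822519) & 0xFFFFFFFF) / 0xFFFFFFFF
    return ix + fx, iy + fy

def worley_f1(x, y):
    """Distance from (x, y) to the nearest feature point (the F1 value).

    Visits the 3x3 block of cells around the query position, the usual
    approximation when each unit cell holds exactly one feature point."""
    cx, cy = math.floor(x), math.floor(y)
    best = float("inf")
    for iy in range(cy - 1, cy + 2):
        for ix in range(cx - 1, cx + 2):
            px, py = _feature_point(ix, iy)
            best = min(best, math.hypot(x - px, y - py))
    return best
```

Thresholding or remapping the F1 distance produces the familiar cell patterns; the paper's adaptive scheme replaces the uniform one-point-per-cell grid and the fixed neighborhood scan.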
  <item>
    <title><![CDATA[Montage4D: Real-time Seamless Fusion and Stylization of Multiview Video Textures]]></title>
    <link>http://jcgt.org/published/0008/01/01</link>
    <guid>http://jcgt.org/published/0008/01/01</guid>
    <category>Paper</category>
    <pubDate>2019-01-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0008/01/01/icon.png">]]><![CDATA[<p>Ruofei Du, Ming Chuang, Wayne Chang, Hugues Hoppe, and Amitabh Varshney, Montage4D: Real-time Seamless Fusion and Stylization of Multiview Video Textures, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 8, no. 1, 1-34, 2019</p>]]><![CDATA[<p>The commoditization of virtual and augmented reality devices and the availability of inexpensive consumer depth cameras have catalyzed a resurgence of interest in spatiotemporal performance capture. Recent systems like Fusion4D and Holoportation address several crucial problems in the real-time fusion of multiview depth maps into volumetric and deformable representations. Nonetheless, stitching multiview video textures onto dynamic meshes remains challenging due to imprecise geometries, occlusion seams, and critical time constraints. In this paper, we present a practical solution for real-time seamless texture montage for dynamic multiview reconstruction. We build on the ideas of dilated depth discontinuities and majority voting from Holoportation to reduce ghosting effects when blending textures. In contrast to that approach, we determine the appropriate blend of textures per vertex using view-dependent rendering techniques, so as to avert fuzziness caused by the ubiquitous normal-weighted blending. By leveraging geodesics-guided diffusion and temporal texture fields, our algorithm mitigates spatial occlusion seams while preserving temporal consistency. Experiments demonstrate significant enhancement in rendering quality, especially in detailed regions such as faces. Furthermore, we present our preliminary exploration of real-time stylization and relighting to empower Holoportation users to interactively stylize live 3D content. We envision a wide range of applications for Montage4D, including immersive telepresence for business, training, and live entertainment.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Costs to run JCGT for 2018]]></title>
    <link>http://jcgt.org/news/2019-01-05-expenses.html</link>
    <guid>http://jcgt.org/news/2019-01-05-expenses.html</guid>
    <category>News</category>
    <pubDate>2019-01-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        I got a lot of positive responses from the post last January about the costs to run JCGT. In the spirit of openness, here is an update for the past year.
        <style>
#budget20190105 tr:nth-child(odd) { background: AliceBlue }
#budget20190105 tr:nth-child(even) { background: LightYellow }
#budget20190105 td:nth-child(1) { width: 35% }
#budget20190105 td:nth-child(2) { width: 20% }
#budget20190105 td:nth-child(3) { width: auto; text-align: center }
#budget20190105 td:nth-child(4) { width:100%; background: white }
        </style>
        <table id="budget20190105">
          <tbody><tr>
            <td>Principal Staff<br>(Editor in Chief, Managing Editor, Advisory board)</td>
            <td>volunteer</td>
            <td>$0</td>
            <td></td>
          </tr>
          <tr>
            <td>Editorial Board</td>
            <td>volunteer</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>External Reviewers</td>
            <td>volunteer</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Copyediting</td>
            <td>volunteer by Managing Editor</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Review management</td>
            <td>email, google apps</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Web site design and scripting</td>
            <td>volunteer by EIC</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Domain registration</td>
            <td>Paid informally by staff</td>
            <td>$16.00</td>
          </tr>
          <tr>
            <td>Hosting, DNS, CDN (Amazon)</td>
            <td>Paid informally by staff</td>
            <td>$78.88</td>
          </tr>
          <tr>
          	<th>Total</th><td></td><th>$94.88</th>
          </tr>
        </tbody></table>

        <p>The bulk of the hosting costs are for CDN bandwidth charges. Our biggest month was September, driven by traffic for our most popular paper of the year, <a href="http://www.jcgt.org/published/0007/03/04/">A Ray-Box Intersection Algorithm and Efficient Dynamic Voxel Rendering</a>.</p>

        <p>In other costs, we're similar to other journals and conferences in relying on a volunteer editor in chief, volunteer associate editors, and volunteer reviewers. We are unusual in having a formal copyediting stage before publication, and are indebted to our Managing Editor for that service.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Sampling the GGX Distribution of Visible Normals]]></title>
    <link>http://jcgt.org/published/0007/04/01</link>
    <guid>http://jcgt.org/published/0007/04/01</guid>
    <category>Paper</category>
    <pubDate>2018-11-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/04/01/icon.png">]]><![CDATA[<p>Eric Heitz, Sampling the GGX Distribution of Visible Normals, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 4, 1-13, 2018</p>]]><![CDATA[<p>Importance sampling microfacet BSDFs using their Distribution of Visible Normals (VNDF) yields significant variance reduction in Monte Carlo rendering. In this article, we describe an efficient and exact sampling routine for the VNDF of the GGX microfacet distribution. This routine leverages the property that GGX is the distribution of normals of a truncated ellipsoid and that sampling the GGX VNDF is equivalent to sampling the 2D projection of this truncated ellipsoid. To do that, we simplify the problem by using the linear transformation that maps the truncated ellipsoid to a hemisphere. Since linear transformations preserve the uniformity of projected areas, sampling in the hemisphere configuration and transforming the samples back to the ellipsoid configuration yields valid samples from the GGX VNDF.</p>]]></description>
  </item>
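The hemisphere trick this abstract outlines is compact enough to sketch directly. Below is a Python transcription of the construction as the abstract describes it; the function name, variable names, and basis-building details are my own, so treat it as an illustrative sketch rather than the paper's official listing.

```python
import math

def sample_ggx_vndf(Ve, alpha_x, alpha_y, U1, U2):
    """Sample a GGX visible normal as seen from unit view direction Ve.

    Ve is in the local shading frame (z = surface normal); alpha_x, alpha_y are
    the GGX roughness parameters; U1, U2 are uniform numbers in [0, 1)."""
    # Map the view direction into the hemisphere configuration
    # (the linear transformation that turns the truncated ellipsoid into a hemisphere).
    vx, vy, vz = alpha_x * Ve[0], alpha_y * Ve[1], Ve[2]
    inv_len = 1.0 / math.sqrt(vx * vx + vy * vy + vz * vz)
    Vh = (vx * inv_len, vy * inv_len, vz * inv_len)

    # Build an orthonormal basis (T1, T2, Vh).
    lensq = Vh[0] * Vh[0] + Vh[1] * Vh[1]
    if lensq > 0.0:
        inv = 1.0 / math.sqrt(lensq)
        T1 = (-Vh[1] * inv, Vh[0] * inv, 0.0)
    else:
        T1 = (1.0, 0.0, 0.0)
    T2 = (Vh[1] * T1[2] - Vh[2] * T1[1],
          Vh[2] * T1[0] - Vh[0] * T1[2],
          Vh[0] * T1[1] - Vh[1] * T1[0])

    # Sample the 2D projection of the hemisphere as seen from Vh: a disk whose
    # lower half is stretched in proportion to how grazing the view is.
    r = math.sqrt(U1)
    phi = 2.0 * math.pi * U2
    t1 = r * math.cos(phi)
    t2 = r * math.sin(phi)
    s = 0.5 * (1.0 + Vh[2])
    t2 = (1.0 - s) * math.sqrt(max(0.0, 1.0 - t1 * t1)) + s * t2

    # Lift the disk sample back onto the hemisphere.
    t3 = math.sqrt(max(0.0, 1.0 - t1 * t1 - t2 * t2))
    Nh = tuple(t1 * T1[i] + t2 * T2[i] + t3 * Vh[i] for i in range(3))

    # Transform back to the ellipsoid configuration: the GGX VNDF sample.
    nx, ny, nz = alpha_x * Nh[0], alpha_y * Nh[1], max(0.0, Nh[2])
    inv_len = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx * inv_len, ny * inv_len, nz * inv_len)
```

The returned unit vector is the sampled microfacet normal in the same frame as `Ve`; reflecting the view direction about it gives an importance-sampled outgoing direction.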
  <item>
    <title><![CDATA[A Ray-Box Intersection Algorithm and Efficient Dynamic Voxel Rendering]]></title>
    <link>http://jcgt.org/published/0007/03/04</link>
    <guid>http://jcgt.org/published/0007/03/04</guid>
    <category>Paper</category>
    <pubDate>2018-09-20</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/03/04/icon.png">]]><![CDATA[<p>Alexander Majercik, Cyril Crassin, Peter Shirley, and Morgan McGuire, A Ray-Box Intersection Algorithm and Efficient Dynamic Voxel Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 3, 66-81, 2018</p>]]><![CDATA[<p>We introduce a novel and efficient method for rendering large models composed of individually-oriented voxels. The core of this method is a new algorithm for computing the intersection point and normal of a 3D ray with an arbitrarily-oriented 3D box, which also has non-rendering applications in GPU physics, such as ray casting and particle collision detection. We measured throughput improvements of 2&times; to 10&times; for the intersection operation versus previous ray-box intersection algorithms on GPUs. Applying this to primary rays increases throughput 20&times; for direct voxel ray tracing with our method versus rasterization of optimal meshes computed from voxels, due to the combined reduction in both computation and bandwidth. Because this method uses no precomputation or spatial data structure, it is suitable for fully dynamic scenes in which every voxel potentially changes every frame. These improvements can enable a dramatic increase in dynamism, view distance, and scene density for visualization applications and voxel games such as <i>LEGO&reg; Worlds</i> and <i>Minecraft</i>. We provide GLSL code for both our algorithm and previous alternative optimized ray-box algorithms, and an Unreal Engine 4 modification for the entire rendering method.</p>]]></description>
  </item>
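For context on what such ray-box routines look like, here is a sketch of the standard slab test for an axis-aligned box, the kind of baseline that oriented-box algorithms like the one above are compared against. This is the textbook method, not the paper's new algorithm, and it omits the surface normal the paper also computes.

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab test: entry distance t of the ray origin + t*dir against an
    axis-aligned box, or None on a miss (considering only t >= 0).

    inv_dir holds precomputed reciprocals of the direction components,
    assumed nonzero here to keep the sketch simple."""
    t_min, t_max = 0.0, float("inf")
    for axis in range(3):
        t0 = (box_min[axis] - origin[axis]) * inv_dir[axis]
        t1 = (box_max[axis] - origin[axis]) * inv_dir[axis]
        if t0 > t1:                # order the two slab-plane intersections
            t0, t1 = t1, t0
        t_min = max(t_min, t0)     # latest entry across the three slabs
        t_max = min(t_max, t1)     # earliest exit across the three slabs
    return t_min if t_min <= t_max else None
```

Precomputing `inv_dir` once per ray lets the per-box work be pure multiplies and min/max operations, which is why this form maps well to GPUs.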
  <item>
    <title><![CDATA[Efficient Unbiased Rendering of Thin Participating Media]]></title>
    <link>http://jcgt.org/published/0007/03/03</link>
    <guid>http://jcgt.org/published/0007/03/03</guid>
    <category>Paper</category>
    <pubDate>2018-09-13</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/03/03/icon.png">]]><![CDATA[<p>Ryusuke Villemin, Magnus Wrenninge, and Julian Fong, Efficient Unbiased Rendering of Thin Participating Media, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 3, 50-65, 2018</p>]]><![CDATA[<p>In recent years, path tracing has become the dominant image synthesis technique for production rendering. Unbiased methods for volume integration have followed, and techniques such as delta tracking, ratio tracking, and spectral decomposition tracking are all in active use; this paper focuses on optimizing the underlying mechanics of these techniques. We present a method for reducing the number of calls to the random number generator and show how modifications to the distance sampling strategy and interaction probabilities can help reduce variance when rendering thin homogeneous and heterogeneous volumes. Our methods are implemented in version 21.7 of Pixar’s RenderMan software.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Celestial Walk: A Terminating, Memoryless Walk for Convex Subdivisions]]></title>
    <link>http://jcgt.org/published/0007/03/02</link>
    <guid>http://jcgt.org/published/0007/03/02</guid>
    <category>Paper</category>
    <pubDate>2018-09-04</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/03/02/icon.png">]]><![CDATA[<p>Wouter Kuijper, Victor Ermolaev, and Olivier Devillers, Celestial Walk: A Terminating, Memoryless Walk for Convex Subdivisions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 3, 29-49, 2018</p>]]><![CDATA[<p>A common solution for routing messages or performing point location in planar subdivisions consists of <i>walking</i> from one face to another using neighboring relationships. If the next face does not depend on the previously visited faces, the walk is called <i>memoryless</i>. We present a new memoryless strategy for convex subdivisions. The known alternatives are the <i>straight walk</i>, which is slightly slower and not memoryless, and the <i>visibility walk</i>, which is guaranteed to work properly only for Delaunay triangulations. We prove termination of our walk using a novel distance measure that, for our proposed walking strategy, is strictly monotonically decreasing.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[View-warped Multi-view Soft Shadowing for Local Area Lights]]></title>
    <link>http://jcgt.org/published/0007/03/01</link>
    <guid>http://jcgt.org/published/0007/03/01</guid>
    <category>Paper</category>
    <pubDate>2018-07-26</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/03/01/icon.png">]]><![CDATA[<p>Adam Marrs, Benjamin Watson, and Christopher G. Healey, View-warped Multi-view Soft Shadowing for Local Area Lights, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 3, 1-28, 2018</p>]]><![CDATA[<p>Rendering soft shadows cast by dynamic objects in real time with few visual artifacts is challenging to achieve. We introduce a new algorithm for local light sources that exhibits fewer artifacts than fast single-view approximations and is faster than high-quality multi-view solutions. Inspired by layered depth images, image warping, and point-based rendering, our algorithm traverses complex occluder geometry once and creates an optimized multi-view point cloud as a proxy. We then render many depth maps simultaneously on graphics hardware using GPU Compute. By significantly reducing the time spent producing depth maps, our solution presents a new alternative for applications that cannot yet afford the most accurate methods, but that strive for higher quality shadows than possible with common approximations.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Cube-to-sphere projections for procedural texturing and beyond]]></title>
    <link>http://jcgt.org/published/0007/02/01</link>
    <guid>http://jcgt.org/published/0007/02/01</guid>
    <category>Paper</category>
    <pubDate>2018-06-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/02/01/icon.png">]]><![CDATA[<p>Matt Zucker and Yosuke Higashi, Cube-to-sphere projections for procedural texturing and beyond, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 2, 1-22, 2018</p>]]><![CDATA[<p>Motivated by efficient GPU procedural texturing of the sphere, we describe several approximately equal-area cube-to-sphere projections. The projections not only provide low-distortion UV mapping, but also enable efficient generation of jittered point sets with O(1) nearest neighbor lookup. We provide GLSL implementations of the projections, with several real-time procedural texturing examples. Our numerical results summarize the various methods’ ability to preserve projected areas as well as their performance on both integrated and discrete GPUs. More broadly, the overall cube-to-sphere approach provides an underexplored avenue for adapting existing 2D grid-based methods to the sphere. As an example, we showcase fast Poisson disk sampling.</p>]]></description>
  </item>
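To make the idea concrete, here is a sketch of the tangent warp, one simple member of this family of cube-face remappings from the cube-mapping literature (shown as an illustration of the general approach, not as a summary of the paper's projections). Each cube face spans 90 degrees of arc, so remapping a face coordinate u in [-1, 1] through the tangent makes equal steps in u subtend equal angles at the sphere's center.

```python
import math

def tangent_warp(u):
    """Forward warp of a cube-face coordinate u in [-1, 1]: equal steps in u
    become equal angles over the face's +/- 45-degree arc."""
    return math.tan(u * math.pi / 4.0)

def tangent_unwarp(v):
    """Inverse warp, used when looking up the uniform coordinate from a
    face position v in [-1, 1]."""
    return math.atan(v) * 4.0 / math.pi
```

The warp fixes the endpoints (tan of plus or minus pi/4 is plus or minus 1), so warped faces still tile the cube seamlessly; fully equal-area projections require more elaborate maps than this angle-equalizing one.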
  <item>
    <title><![CDATA[All I3D short conference papers invited to extend to JCGT]]></title>
    <link>http://jcgt.org/news/2018-05-17-I3DinJCGT.html</link>
    <guid>http://jcgt.org/news/2018-05-17-I3DinJCGT.html</guid>
    <category>News</category>
    <pubDate>2018-05-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>Since the beginning of the partnership between the <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D)</a> and JCGT five years ago, 3-5 regular JCGT papers have taken advantage of the opportunity to present at I3D each year, and JCGT has invited 3-5 I3D papers to extend for JCGT. These extended papers must contain at least 25% new material, generally focusing on the practical details left out of academic conference papers, on new developments, and on the robustness of the presented techniques.</p>
        <p>This year, with the creation of the new Proceedings of the ACM journal, I3D has split their accepted papers into two categories. More than 20 I3D papers will be published directly in PACM, and 14 "conference" papers will appear only in the I3D proceedings. I'm pleased to announce that JCGT is inviting extended versions of <b>all</b> 14 I3D "conference" papers this year. This gives <b>every</b> paper published in I3D 2018 a path to journal publication, either directly in PACM or through extension to JCGT.</p>
        <p>The invited papers are:
        </p><ul>
          <li>Floyd Chitalu, Christophe Dubach, and Taku Komura. Bulk-Synchronous Parallel BVH Traversal for Collision Detection on GPUs.</li>
          <li>Ruofei Du, Ming Chuang, Wayne Chang, Hugues Hoppe, and Amitabh Varshney. Montage4D: Interactive Seamless Fusion of Multiview Video Textures.</li>
          <li>Alex Frasson, Tiago Augusto Engel, and Cesar Tadeu Pozzer. Efficient Screen-Space Rendering of Vector Features on Virtual Terrains.</li>
          <li>Eric Heitz, Stephen Hill, and Morgan McGuire. Combining Analytic Direct Illumination and Stochastic Shadows.</li>
          <li>Hao Jiang, Zhigang Deng, Mingliang Xu, Xiangjun He, Tianlu Mao, and Zhaoqi Wang. An Emotion Evolution based Model for Collective Behavior Simulation.</li>
          <li>Alain Juarez-Perez and Marcelo Kallmann. Fast Behavioral Locomotion with Layered Navigation Meshes.</li>
          <li>Benjamin Keinert, Jana Martschinke, and Marc Stamminger. Learning Real-Time Ambient Occlusion from Distance Representations.</li>
          <li>Gregor Mückl and Carsten Dachsbacher. Transport Path Precomputation for Real-time Room Reverb.</li>
          <li>Julien Philip and George Drettakis. Plane-Based Multi-View Inpainting for Image-Based Rendering in Large Scenes.</li>
          <li>Iana Podkosova and Hannes Kaufmann. Mutual Collision Avoidance During Walking in Real and Collaborative Virtual Environments.</li>
          <li>Nghia Truong and Cem Yuksel. A Narrow-Range Filter for Screen-Space Fluid Rendering.</li>
          <li>Sai-Keung Wong, Yi-Hung Chou, and Hsiang-Yu Yang. A Framework for Simulating Agent-Based Cooperative Tasks in Crowd Simulation.</li>
          <li>Kai Xiao, Gabor Liktor, and Karthik Vaidyanathan. Coarse Pixel Shading with Temporal Supersampling.</li>
          <li>Weidan Xiong, Pengbo Zhang, Pedro V Sander, and Ajay Joneja. Shape-Inspired Architectural Design.</li>
        </ul>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Fast, high-quality rendering of liquids generated using large scale SPH simulation]]></title>
    <link>http://jcgt.org/published/0007/01/02</link>
    <guid>http://jcgt.org/published/0007/01/02</guid>
    <category>Paper</category>
    <pubDate>2018-03-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/01/02/icon.png">]]><![CDATA[<p>Xiangyun Xiao, Shuai Zhang, and Xubo Yang, Fast, high-quality rendering of liquids generated using large scale SPH simulation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 1, 17-39, 2018</p>]]><![CDATA[<p>Particle-based methods like smoothed particle hydrodynamics (SPH) are increasingly adopted for large-scale fluid simulation in interactive computer graphics. However, surface rendering for such dynamic particle sets is challenging: current methods either produce low-quality results, or they are time-consuming. In this paper, we introduce a novel approach to render high-quality fluid surfaces in screen space. Our method combines the techniques of particle splatting, ray-casting, and surface-normal estimation. We apply particle splatting to accelerate the ray-casting process, estimating the surface normal using principal component analysis (PCA) and a GPU-based technique to further accelerate our method. Our method can produce high-quality smooth surfaces while preserving thin and sharp details of large-scale fluids. The computation and memory cost of our rendering step depends only on the image resolution. These advantages make our method very suitable for previewing or rendering hundreds of millions of particles interactively. We demonstrate the efficiency and effectiveness of our method by rendering various fluid scenarios with different-sized particle sets.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT papers to be presented at I3D, extended I3D papers to be published in JCGT]]></title>
    <link>http://jcgt.org/news/2018-03-01-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2018-03-01-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2018-03-01</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        <p>As part of the continued relationship between JCGT and the <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>, papers published in JCGT have the option to be presented by the authors at the annual I3D conference. This year, the following JCGT papers will be presented at I3D in Montreal, May 15-18, 2018:</p>
        <ul>
          <li>Eric Lengyel, <a href="/published/0006/02/02">GPU-Centered Font Rendering Directly from Glyph Outlines</a>.</li>
          <li>Christiaan Gribble, Ingo Wald, and Jefferson Amstutz, <a href="/published/0005/04/01">Implementing Node Culling Multi-Hit BVH Traversal in Embree</a>.</li>
          <li>Alexander V. Popov, <a href="/published/0005/04/03">An Efficient Depth Linearization Method for Oblique View Frustums</a>.</li>
        </ul>

        <p>In addition, many of you may know that I3D has had some changes this year, with the upcoming Proceedings of the ACM (PACM) on Computer Graphics and Interactive Techniques. I3D papers accepted as <i>long papers</i> will appear in the I3D proceedings and also be published in PACM. It has not been clear how that will affect the long-standing relationship between I3D and JCGT that has allowed JCGT papers to be presented at I3D and selected I3D papers to be extended to JCGT.</p>
        <p>I am happy to announce that this relationship will be continuing in both directions. Any <i>short papers</i> at I3D will still be eligible to be invited to extend to JCGT. Papers invited to JCGT will be announced at the conference in Montreal. These papers do need to be extended with material not included in the original I3D paper, but given JCGT's focus on "Gems"-like details, there is often a JCGT paper hidden in the work already done, but not reported for a traditional academic conference paper. Invited extensions will receive a second review, focusing on the fit of the extended paper to JCGT and the merits of the new material.</p>  

        <p>Though invited extensions are a fast track to getting the unpublished details of a technique into JCGT, we welcome submitted extensions from any conference. Please refer to the original and identify your paper as an extension when submitting.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Rendering of Linear Brush Strokes]]></title>
    <link>http://jcgt.org/published/0007/01/01</link>
    <guid>http://jcgt.org/published/0007/01/01</guid>
    <category>Paper</category>
    <pubDate>2018-02-14</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0007/01/01/icon.png">]]><![CDATA[<p>Apoorva Joshi, Efficient Rendering of Linear Brush Strokes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 7, no. 1, 1-16, 2018</p>]]><![CDATA[<p>We introduce a fast approach to rendering brush strokes with variable hardness, diameter, and flow, for raster image-editing applications. Given an N-pixel-long linear brush stroke with diameter M, our approach reduces the time complexity from O(NM<sup>2</sup>) to O(NM), while enabling the stroke to be rendered in a single GPU draw call and while avoiding overdraw.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Costs to run an open access journal]]></title>
    <link>http://jcgt.org/news/2018-01-04-expenses.html</link>
    <guid>http://jcgt.org/news/2018-01-04-expenses.html</guid>
    <category>News</category>
    <pubDate>2018-01-04</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        I've talked to a few people recently who were curious about how a journal like JCGT can be totally free to submit and to read, and how that model is sustainable. Here's a rundown of the parts of running JCGT and what they cost:
        <style>
#budget20180104 tr:nth-child(odd) { background: AliceBlue }
#budget20180104 tr:nth-child(even) { background: LightYellow }
#budget20180104 td:nth-child(1) { width: 35% }
#budget20180104 td:nth-child(2) { width: 20% }
#budget20180104 td:nth-child(3) { width: auto; text-align: center }
#budget20180104 td:nth-child(4) { width:100%; background: white }
        </style>
        <table id="budget20180104">
          <tbody><tr>
            <td>Principal Staff (Editor in Chief, Managing Editor, Advisory board)</td>
            <td>volunteer</td>
            <td>$0</td>
            <td></td>
          </tr>
          <tr>
            <td>Editorial Board</td>
            <td>volunteer</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>External Reviewers</td>
            <td>volunteer</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Copyediting</td>
            <td>volunteer by Managing Editor and <br>(occasionally) EIC</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Review management</td>
            <td>email, google spreadsheets, google drive</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Web site design and scripting</td>
            <td>Custom static html/javascript created by advisory board member/former EIC Morgan McGuire and maintained by current EIC Marc Olano</td>
            <td>$0</td>
          </tr>
          <tr>
            <td>Domain registration &amp; email</td>
            <td>(Bluehost) cost paid informally by staff</td>
            <td>~$25/year</td>
          </tr>
          <tr>
            <td>Hosting, DNS, CDN</td>
            <td>(Amazon) cost paid informally by staff</td>
            <td>~$75/year</td>
          </tr>
        </tbody></table>

        <p>Pretty much all journals, open access or not, have a volunteer editor in chief, volunteer associate editors, and volunteer reviewers, so JCGT is not unusual in that regard.</p>

        <p>At least in computer science, it's common for journals not to have any formal copyediting. Papers are formatted according to a template, and the final "camera-ready" version is produced by the authors. JCGT is unusual in performing an explicit copyediting step, though I believe it improves the quality and consistency of the papers we publish.</p>

        <p>Review management software is definitely one place where we do skimp. There are <a href="http://oad.simmons.edu/oadwiki/Free_and_open-source_journal_management_software">free alternatives</a>, but so far with our submission rate averaging about 30 papers per year, the manual version has worked well enough.</p>

        <p>Hosting-related costs are the one thing we can't get entirely for free. The costs of about $100/year are paid out of pocket by some of the journal staff as a service to the community. Paraphrasing one of my colleagues: for less than the cost of a single traditional journal subscription, the whole community benefits.</p>
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Call for JCGT papers to present at I3D 2018]]></title>
    <link>http://jcgt.org/news/2018-01-02-I3D.html</link>
    <guid>http://jcgt.org/news/2018-01-02-I3D.html</guid>
    <category>News</category>
    <pubDate>2018-01-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
        As part of the continuing partnership between JCGT and the <a href="http://i3dsymposium.org">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D)</a>, papers published in JCGT are eligible to be presented in the plenary session of the annual I3D conference. Eligible papers should not have been previously presented (so invited extensions of previous I3D papers are not eligible), and should have been published since the last I3D submission deadline (when the similar call was made for last year's conference). If you have published a paper in JCGT within the past year and are interested in presenting at this year's I3D, email <a href="mailto:editor-in-chief@jcgt.org">editor-in-chief@jcgt.org</a>.
      ]]></description>
  </item>
  <item>
    <title><![CDATA[Triangle Reordering for Efficient Rendering in Complex Scenes]]></title>
    <link>http://jcgt.org/published/0006/03/03</link>
    <guid>http://jcgt.org/published/0006/03/03</guid>
    <category>Paper</category>
    <pubDate>2017-09-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/03/03/icon.png">]]><![CDATA[<p>Songfang Han and Pedro Sander, Triangle Reordering for Efficient Rendering in Complex Scenes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 3, 38-52, 2017</p>]]><![CDATA[<p>We introduce an automatic approach for optimizing the triangle rendering order of animated meshes with the objective of reducing overdraw while maintaining good post-transform vertex cache efficiency. Our approach is based on prior methods designed for static meshes. We propose an algorithm that clusters the space of viewpoints and key frames. For each cluster, we generate a triangle order that exhibits satisfactory vertex cache efficiency and low overdraw. Results show that our approach significantly improves overdraw throughout the entire animation sequence while only requiring a few index buffers. We expect that this approach will be useful for games and other real-time rendering applications that involve complex shading of articulated characters.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT site improvements and new RSS feed]]></title>
    <link>http://jcgt.org/news/2017-08-29-RSSfeed.html</link>
    <guid>http://jcgt.org/news/2017-08-29-RSSfeed.html</guid>
    <category>News</category>
    <pubDate>2017-08-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>JCGT has had several web infrastructure improvements over the past few months. Probably only a few will be externally visible. First, we've moved our hosting to Amazon's CDN, which should significantly improve site and paper loading time for those of you outside the US. Second, as an oft-requested feature, we now have an RSS feed <a href="/feed.xml"><img src="/rss.gif" width="16" height="16" alt="RSS"></a> of both papers and news items (like this one). The most recent few papers now appear on the <a href="/">main page</a>, though the <a href="/read.html">read</a> link will still get you to all published papers.</p>
                <p>Of course, the existing methods of finding out about new JCGT papers still exist: <a href="https://twitter.com/JCGT_announce">@JCGT_announce</a> on twitter, the <a href="https://www.facebook.com/pages/Journal-of-Computer-Graphics-Techniques/466102796749942">Journal of Computer Graphics Techniques</a> page on Facebook, and the <a href="https://groups.google.com/group/jcgt-announce">JCGT_announce@groups.google.com</a> email list.</p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Double Hierarchies for Directional Importance Sampling in Monte Carlo Rendering]]></title>
    <link>http://jcgt.org/published/0006/03/02</link>
    <guid>http://jcgt.org/published/0006/03/02</guid>
    <category>Paper</category>
    <pubDate>2017-08-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/03/02/icon.png">]]><![CDATA[<p>Norbert Bus and Tamy Boubekeur, Double Hierarchies for Directional Importance Sampling in Monte Carlo Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 3, 25-37, 2017</p>]]><![CDATA[<p>We describe a novel representation of the light field tailored to improve importance sampling for Monte Carlo rendering. The domain of the light field, i.e., the product space of spatial positions and directions, is hierarchically subdivided into subsets on which local models characterize the light transport. The data structure, which is based on <i>double trees</i>, approximates the exact light field and enables very efficient queries for importance sampling and easy tracing of photons in the scene. The framework is simple yet flexible, enabling the usage of any type of local model for representing the light field, provided it can be efficiently importance sampled. The method also supports progressive refinement with an arbitrary number of photons. We provide a reference open source implementation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Merging B-spline curves or surfaces using matrix representation]]></title>
    <link>http://jcgt.org/published/0006/03/01</link>
    <guid>http://jcgt.org/published/0006/03/01</guid>
    <category>Paper</category>
    <pubDate>2017-08-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/03/01/icon.png">]]><![CDATA[<p>Sz. B&eacute;la and M. Szilv&aacute;si-Nagy, Merging B-spline curves or surfaces using matrix representation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 3, 1-24, 2017</p>]]><![CDATA[<p>This paper presents a simple and general technique to transform non-uniform B-splines to B&eacute;zier or power basis form. This conversion is very practical for representing spline curves or surfaces by explicit parametric vector functions in matrix form. We apply the proposed technique in a merging method for B-spline curve segments. The method merges B-spline curves iteratively with each other, and it is also extended to stitch B-spline surface patches even if they overlap each other or have gaps between them.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Generating stratified random lines in a square]]></title>
    <link>http://jcgt.org/published/0006/02/03</link>
    <guid>http://jcgt.org/published/0006/02/03</guid>
    <category>Paper</category>
    <pubDate>2017-06-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/02/03/icon.png">]]><![CDATA[<p>Peter Shirley and Chris Wyman, Generating stratified random lines in a square, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 2, 48-54, 2017</p>]]><![CDATA[<p>When generating a set of uniformly distributed lines through a square, some care is needed to avoid bias in line orientation and position. We present a compact algorithm to generate unbiased uniformly distributed lines from a uniform point set over the unit square.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[GPU-Centered Font Rendering Directly from Glyph Outlines]]></title>
    <link>http://jcgt.org/published/0006/02/02</link>
    <guid>http://jcgt.org/published/0006/02/02</guid>
    <category>Paper</category>
    <pubDate>2017-06-14</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/02/02/icon.png">]]><![CDATA[<p>Eric Lengyel, GPU-Centered Font Rendering Directly from Glyph Outlines, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 2, 31-47, 2017</p>]]><![CDATA[<p>This paper describes a method for rendering antialiased text directly from glyph outline data on the GPU without the use of any precomputed texture images or distance fields. This capability is valuable for text displayed inside a 3D scene because, in addition to a perspective projection, the transform applied to the text is constantly changing with a dynamic camera view. Our method overcomes numerical precision problems that produced artifacts in previously published techniques and promotes high GPU utilization with an implementation that naturally avoids divergent branching.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Symmetry-aware Sparse Voxel DAGs (SSVDAGs) for compression-domain tracing of high-resolution geometric scenes]]></title>
    <link>http://jcgt.org/published/0006/02/01</link>
    <guid>http://jcgt.org/published/0006/02/01</guid>
    <category>Paper</category>
    <pubDate>2017-05-12</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/02/01/icon.png">]]><![CDATA[<p>Alberto Jaspe Villanueva, Fabio Marton,  and Enrico Gobbetti, Symmetry-aware Sparse Voxel DAGs (SSVDAGs) for compression-domain tracing of high-resolution geometric scenes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 2, 1-30, 2017</p>]]><![CDATA[<p>Voxelized representations of complex 3D scenes are widely used to accelerate visibility queries in many GPU rendering techniques. Since GPU memory is limited, it is important that these data structures can be kept within a strict memory budget. Recently, directed acyclic graphs (DAGs) have been successfully introduced to compress sparse voxel octrees (SVOs), but they are limited to sharing identical regions of space. In this paper, we show that a more efficient lossless compression of geometry can be achieved while keeping the same visibility-query performance. This is accomplished by merging subtrees that are identical through a similarity transform and by exploiting the skewed distribution of references to shared nodes to store child pointers using a variable bit-rate encoding. We also describe how, by selecting plane reflections along the main grid directions as symmetry transforms, we can construct highly compressed GPU-friendly structures using a fully out-of-core method. Our results demonstrate that state-of-the-art compression and real-time tracing performance can be achieved on high-resolution voxelized representations of real-world scenes of very different characteristics, including large CAD models, 3D scans, and typical gaming models, leading, for instance, to real-time GPU in-core visualization with shading and shadows of the full Boeing 777 at sub-millimeter precision. This article is based on an earlier work: <i>SSVDAGs: Symmetry-aware Sparse Voxel DAGs, in Proceedings of the 20th ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games &copy; ACM, 2016.
<a href='http://dx.doi.org/10.1145/2856400.2856420'>http://dx.doi.org/10.1145/2856400.2856420</a></i>. We include here a more thorough exposition, a description of alternative construction and tracing methods, as well as additional results. In order to facilitate understanding, evaluation and extensions, the full source code of the method is provided in the supplementary material.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Improved Moment Shadow Maps for Translucent Occluders, Soft Shadows and Single Scattering]]></title>
    <link>http://jcgt.org/published/0006/01/03</link>
    <guid>http://jcgt.org/published/0006/01/03</guid>
    <category>Paper</category>
    <pubDate>2017-03-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/01/03/icon.png">]]><![CDATA[<p>Christoph Peters, Cedrick M&uuml;nstermann, Nico Wetzstein,  and Reinhard Klein, Improved Moment Shadow Maps for Translucent Occluders, Soft Shadows and Single Scattering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 1, 17-67, 2017</p>]]><![CDATA[<p>Like variance shadow maps, the recently proposed moment shadow maps can be filtered directly, but they provide substantially higher quality. We combine them with earlier approaches to enable three new applications. Shadows for translucent occluders are obtained by simply rendering to a moment shadow map with alpha blending. Soft shadows in the spirit of percentage-closer soft shadows are rendered using two queries to a summed-area table of a moment shadow map. Single scattering is rendered through one lookup per pixel in a prefiltered moment shadow map with six channels. As a foundation, we also propose improvements to moment shadow mapping itself. All these techniques scale particularly well to high output resolutions and enable proper antialiasing of shadows through extensive filtering.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Improved Accuracy when Building an Orthonormal Basis]]></title>
    <link>http://jcgt.org/published/0006/01/02</link>
    <guid>http://jcgt.org/published/0006/01/02</guid>
    <category>Paper</category>
    <pubDate>2017-03-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/01/02/icon.png">]]><![CDATA[<p>Nelson Max, Improved Accuracy when Building an Orthonormal Basis, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 1, 9-16, 2017</p>]]><![CDATA[<p>Frisvad's method for building a 3D orthonormal basis from a unit vector has accuracy problems in its published floating-point form. These problems are investigated, and a partial fix is suggested: replacing the threshold 0.9999999 with -0.99998796, which decreases the maximum error in the orthonormality conditions from 0.6250 to 0.0049.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Building an Orthonormal Basis, Revisited]]></title>
    <link>http://jcgt.org/published/0006/01/01</link>
    <guid>http://jcgt.org/published/0006/01/01</guid>
    <category>Paper</category>
    <pubDate>2017-03-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0006/01/01/icon.png">]]><![CDATA[<p>Tom Duff, James Burgess, Per Christensen, Christophe Hery, Andrew Kensler, Max Liani,  and Ryusuke Villemin, Building an Orthonormal Basis, Revisited, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 6, no. 1, 1-8, 2017</p>]]><![CDATA[<p>Frisvad describes a widely used computational method for efficiently augmenting a given single unit vector with two other vectors to produce an orthonormal frame in three dimensions, a useful operation for any physically based renderer. However, the implementation has a precision problem: as the z component of the input vector approaches -1, floating-point cancellation causes the frame to lose all precision. This paper introduces a solution to the precision problem and shows how to implement the resulting function in C++ with performance comparable to the original.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Announcing I3D 2017 papers invited to JCGT]]></title>
    <link>http://jcgt.org/news/2017-03-01-I3DinJCGT.html</link>
    <guid>http://jcgt.org/news/2017-03-01-I3DinJCGT.html</guid>
    <category>News</category>
    <pubDate>2017-03-01</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p> At the closing session of the <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>, we announced the following papers which have been invited to submit extended versions to JCGT. Congratulations to these authors!</p>
                <ul>
                    <li>Binh Huy Le and Zhigang Deng, <strong>Interactive Cage Generation for Mesh Deformation</strong>.</li>
                    <li>Morgan McGuire, Michael Mara, Derek Nowrouzezahrai, and David Luebke, <strong>Real-Time Global Illumination using Precomputed Light Field Probes</strong>.</li>
                    <li>Xiangyun Xiao, Shuai Zhang, and Xubo Yang, <strong>Real-Time High-Quality Surface Rendering for Large Scale Particle-based Fluids</strong>.</li>
                    <li>Yuriy O'Donnell and Matthäus Chajdas, <strong>Tiled Light Trees</strong>.</li>
                </ul>
                <p>Other conference paper authors (I3D or otherwise) who were not explicitly invited are still encouraged to submit extensions of their work to JCGT if they have at least 25% new material on the practical implementation or evaluation beyond what was included in the original paper.</p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Announcing JCGT papers to be presented at I3D]]></title>
    <link>http://jcgt.org/news/2017-01-03-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2017-01-03-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2017-01-03</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>As part of the continued relationship between JCGT and the <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>, papers published in JCGT have the option to be presented by the authors at the annual I3D conference. This year, the following JCGT papers will be presented at I3D in San Francisco, February 25-26, 2017:</p>
                <ul>
                    <li>Tuanfeng Y. Wang, Yuan Liu, Xuefeng Liu, Zhouwang Yang, Dongming Yan, and Ligang Liu, <a href="published/0005/03/02/">Global Stiffness Structural Optimization for 3D Printing under Unknown Loads</a>.</li>
                    <li>Valentin Fuetterling, Carsten Lojewski, Franz-Josef Pfreundt, and Achim Ebert, <a href="published/0004/04/05/">Efficient Ray Tracing Kernels for Modern CPU Architectures</a>.</li>
                    <li>Yusuke Tokuyoshi and Takahiro Harada, <a href="published/0005/01/02/">Stochastic Light Culling</a>.</li>
                    <li>Shinji Ogaki and Alexandre Derouet-Jourdan, <a href="published/0005/02/02/">An N-ary BVH Child Node Sorting Technique for Occlusion Tests</a>.</li> 
                    <li>Magnus Wrenninge, <a href="published/0005/01/01/">Efficient Rendering of Volumetric Motion Blur Using Temporally Unstructured Volumes</a>.</li>
                </ul>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Gradient-Domain Processing of Meshes]]></title>
    <link>http://jcgt.org/published/0005/04/04</link>
    <guid>http://jcgt.org/published/0005/04/04</guid>
    <category>Paper</category>
    <pubDate>2016-12-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/04/04/icon.png">]]><![CDATA[<p>Ming Chuang, Szymon Rusinkiewicz,  and Misha Kazhdan, Gradient-Domain Processing of Meshes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 4, 44-55, 2016</p>]]><![CDATA[<p>This paper describes an implementation of gradient-domain processing for editing the geometry of triangle meshes in 3D. We show applications to mesh smoothing and sharpening and describe anisotropic extensions that enable edge-aware processing.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[An Efficient Depth Linearization Method for Oblique View Frustums]]></title>
    <link>http://jcgt.org/published/0005/04/03</link>
    <guid>http://jcgt.org/published/0005/04/03</guid>
    <category>Paper</category>
    <pubDate>2016-12-21</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/04/03/icon.png">]]><![CDATA[<p>Alexander V. Popov, An Efficient Depth Linearization Method for Oblique View Frustums, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 4, 36-43, 2016</p>]]><![CDATA[<p>Many modern 3D rendering techniques operate on fragment depths and must sample depth textures, which contain non-linear window-space z-coordinate values, to obtain the depth of particular fragments. In most cases, such values must be linearized before use, in particular to compare fragment depths, to perform view-space arithmetic, or to compare depths of scenes rendered with different projections. Transforming depth from window space to view space can become complicated and unintuitive in the case of an oblique frustum whose modified near clipping plane is not parallel to the conventional near plane. Such frustums can be used for effective, high-performance geometry clipping against a user-defined plane. This paper discusses an efficient depth-value linearization method for an arbitrary projection matrix, which can be used in post-processing, ambient occlusion, and other depth-dependent effects.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Real-Time Global Illumination Using Precomputed Illuminance Composition with Chrominance Compression]]></title>
    <link>http://jcgt.org/published/0005/04/02</link>
    <guid>http://jcgt.org/published/0005/04/02</guid>
    <category>Paper</category>
    <pubDate>2016-12-13</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/04/02/icon.png">]]><![CDATA[<p>Johannes Jendersie, David Kuri,  and Thorsten Grosch, Real-Time Global Illumination Using Precomputed Illuminance Composition with Chrominance Compression, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 4, 8-35, 2016</p>]]><![CDATA[<p>In this paper we present a new real-time approach for indirect global illumination under dynamic lighting conditions. We use surfels to gather a sampling of the local illumination and propagate the light through the scene using a hierarchy and a set of precomputed light transport paths. The light is then aggregated into caches for lighting static and dynamic geometry. By using a spherical harmonics representation, caches preserve incident light directions to allow both diffuse and slightly glossy BRDFs for indirect lighting. We provide experimental results for up to eight bands of spherical harmonics to stress the limits of specular reflections. In addition, we apply a chrominance downsampling to reduce the memory overhead of the caches.</p>

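<p>For context, chrominance compression of this kind relies on a decorrelating color transform, so that most of the perceptually important signal lands in a luminance channel and the two chrominance channels can be stored with fewer spherical-harmonics coefficients. The sketch below shows one common such transform, RGB to YCoCg; treating this as the transform used by the paper is an assumption, and the function names are purely illustrative:</p>

```python
def rgb_to_ycocg(r, g, b):
    # Decorrelate RGB into luminance (Y) and two chrominance channels (Co, Cg).
    y = 0.25 * r + 0.5 * g + 0.25 * b
    co = 0.5 * r - 0.5 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return y, co, cg

def ycocg_to_rgb(y, co, cg):
    # Exact algebraic inverse of rgb_to_ycocg.
    r = y + co - cg
    g = y + cg
    b = y - co - cg
    return r, g, b
```

<p>Gray inputs produce zero chrominance, which is why a cache can afford a coarser representation for Co/Cg than for Y without a large visible error.</p>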
<p>The sparse sampling of illumination in surfels also enables indirect lighting from many light sources and an efficient progressive multi-bounce implementation. Furthermore, any existing pipeline can be used for surfel lighting, facilitating the use of all kinds of light sources, including sky lights, without a large implementation effort. In addition to the general initial lighting, our method adds a simple way to incorporate area lights. If using an emissive term in the surfels, area lights are included in the precomputed light transport. Thus, precomputation takes care of the proper shadowing.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Implementing Node Culling Multi-Hit BVH Traversal in Embree]]></title>
    <link>http://jcgt.org/published/0005/04/01</link>
    <guid>http://jcgt.org/published/0005/04/01</guid>
    <category>Paper</category>
    <pubDate>2016-11-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/04/01/icon.png">]]><![CDATA[<p>Christiaan Gribble, Ingo Wald,  and Jefferson Amstutz, Implementing Node Culling Multi-Hit BVH Traversal in Embree, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 4, 1-7, 2016</p>]]><![CDATA[<p>We present an implementation of node culling multi-hit BVH traversal in Embree. Whereas previous work was limited by API restrictions, as of version 2.10.0, Embree respects ray state modifications from within its intersection callbacks (ICBs). This behavior permits an ICB-based implementation of the node-culling algorithm, but requires a trick to induce culling once the necessary conditions are met: candidate intersections are accepted with a (possibly) modified <i>t<sub>far</sub></i> value, so that ray traversal continues with an updated ray interval. We highlight a scalar implementation of the Embree ICB and report results for a vector implementation in OSPRay that improves performance by as much as 2x relative to naive multi-hit when users request fewer-than-all hits.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Memory Efficient Uniform Grid Build Process for GPUs]]></title>
    <link>http://jcgt.org/published/0005/03/04</link>
    <guid>http://jcgt.org/published/0005/03/04</guid>
    <category>Paper</category>
    <pubDate>2016-09-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/03/04/icon.png">]]><![CDATA[<p>Eugene M. Taranta II and Sumanta N. Pattanaik, A Memory Efficient Uniform Grid Build Process for GPUs, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 3, 50-67, 2016</p>]]><![CDATA[<p>Uniform grids are a common acceleration structure used to speed up intersection queries across various rendering and physics applications. These structures are attractive because they are easy to understand and exhibit fast build and query times. Recent advances in parallel algorithms have further increased their construction speed, although they still require substantial memory allocations throughout the build process and in their final representation. To address these memory consumption issues, we introduce a novel but easy-to-understand approach. Although our process is general, we demonstrate its effectiveness in an example real-time ray-tracing application where we see a 28% to 38% reduction in memory allocated for optimal grid densities and a 31% to 38% reduction in their final memory footprint. As grid densities increase, savings approach 45% and higher. We also show that our process does not sacrifice performance in terms of overall frame rates.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast Ray-Triangle Intersections by Coordinate Transformation]]></title>
    <link>http://jcgt.org/published/0005/03/03</link>
    <guid>http://jcgt.org/published/0005/03/03</guid>
    <category>Paper</category>
    <pubDate>2016-09-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/03/03/icon.png">]]><![CDATA[<p>Doug Baldwin and Michael Weber, Fast Ray-Triangle Intersections by Coordinate Transformation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 3, 39-49, 2016</p>]]><![CDATA[<p>Ray-triangle intersection is a crucial calculation in ray tracing. We present a new algorithm for finding these intersections, occupying a different place in the spectrum of time-space trade-offs than existing algorithms do. Our algorithm provides faster ray-triangle intersection calculations at the expense of precomputing and storing a small amount of extra information for each triangle. Running under ideal experimental conditions, our algorithm is always faster than the standard M&ouml;ller and Trumbore algorithm, and faster than a highly tuned modern version of it except at very high ray-triangle hit rates. Replacing the M&ouml;ller and Trumbore algorithm with ours in a complete ray tracer speeds up image generation by between 1 and 6%, depending on the image. We have coded our method in C++, and provide two implementations as supplements to this article.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Global Stiffness Structural Optimization for 3D Printing under Unknown Loads]]></title>
    <link>http://jcgt.org/published/0005/03/02</link>
    <guid>http://jcgt.org/published/0005/03/02</guid>
    <category>Paper</category>
    <pubDate>2016-08-11</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/03/02/icon.png">]]><![CDATA[<p>Tuanfeng Y. Wang, Yuan Liu, Xuefeng Liu, Zhouwang Yang, Dongming Yan,  and Ligang Liu, Global Stiffness Structural Optimization for 3D Printing under Unknown Loads, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 3, 18-38, 2016</p>]]><![CDATA[<p>The importance of the stiffness of a 3D printed object has only been realized gradually over the years. 3D printed objects are always hollow, with interior structure to make the fabrication process cost-effective while maintaining stiffness. State-of-the-art techniques are either <i>redundant</i> (using much more material than necessary to ensure stiffness in any case) or optimize the structure under one of the most probable load distributions, which may fall short in other cases. Furthermore, unlike industrial products using <i>brittle</i> materials such as metal, cement, etc., where stress plays a very important role in structural analysis, materials used in 3D printing such as ABS, Nylon, resin, etc. are rather elastic (<i>flexible</i>). When considering structural problems of 3D printed objects, large-scale deformation occurs before stress limits are reached when loads are applied. We propose a novel approach for designing the interior of an object by optimizing its global stiffness &mdash; the maximum deformation under any possible load distribution. The basic idea is to maximize the smallest eigenvalue of the stiffness matrix. More specifically, we first simulate the object as a lightweight frame structure and optimize both its size and geometry using an eigen-mode-like formulation, interleaved with a topology cleanup. A postprocess then generates the final object from the optimized frame structure. Unlike existing methods, the proposed method does not require specific load cases or material strengths as inputs. Our experiments show that optimizing the interior structure under unknown loads automatically retains strength where the structure is weakest, proving this to be a powerful and practical design framework.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Stereoscopic Rendering of Building Information Models (BIM)]]></title>
    <link>http://jcgt.org/published/0005/03/01</link>
    <guid>http://jcgt.org/published/0005/03/01</guid>
    <category>Paper</category>
    <pubDate>2016-08-01</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/03/01/icon.png">]]><![CDATA[<p>Mikael Johansson, Efficient Stereoscopic Rendering of Building Information Models (BIM), <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 3, 1-17, 2016</p>]]><![CDATA[<p>This paper describes and investigates stereo instancing&mdash;a single-pass stereo rendering technique based on hardware-accelerated geometry instancing&mdash;for the purpose of rendering building information models (BIM) on modern head-mounted displays (HMD), such as the Oculus Rift. It is shown that the stereo instancing technique is very well suited for integration with query-based occlusion culling as well as conventional geometry instancing, and it outperforms the traditional two-pass stereo rendering approach, geometry shader-based stereo duplication, as well as brute-force stereo rendering of typical BIMs on recent graphics hardware.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[An N-ary BVH Child Node Sorting Technique for Occlusion Tests]]></title>
    <link>http://jcgt.org/published/0005/02/02</link>
    <guid>http://jcgt.org/published/0005/02/02</guid>
    <category>Paper</category>
    <pubDate>2016-06-09</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/02/02/icon.png">]]><![CDATA[<p>Shinji Ogaki and Alexandre Derouet-Jourdan, An N-ary BVH Child Node Sorting Technique for Occlusion Tests, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 2, 22-37, 2016</p>]]><![CDATA[<p>The cost of occlusion tests can be reduced by changing the traversal order. In this paper, we introduce a very simple, yet general, cost model and an efficient algorithm to determine the optimal order of the child nodes of n-ary bounding volume hierarchies (BVHs). Our cost model is derived by extending the shadow ray distribution heuristic from binary BVHs to n-ary BVHs, and our algorithm does not require an extra BVH for shadow rays.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Mappings between Sphere, Disc, and Square]]></title>
    <link>http://jcgt.org/published/0005/02/01</link>
    <guid>http://jcgt.org/published/0005/02/01</guid>
    <category>Paper</category>
    <pubDate>2016-04-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/02/01/icon.png">]]><![CDATA[<p>Martin Lambers, Mappings between Sphere, Disc, and Square, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 2, 1-21, 2016</p>]]><![CDATA[<p>A variety of mappings between a sphere and a disc and between a disc and a square, as well as combinations of both, are used in computer graphics applications, resulting in mappings between spheres and squares. Many options exist for each type of mapping; to pick the right methods for a given application requires knowledge about the nature and magnitude of mapping distortions.</p>

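<p>As an illustration of the kind of disc&ndash;square mapping compared in such surveys, the following sketch implements the concentric map in the style of Shirley and Chiu, which takes points in the unit square to the unit disc while preserving fractional area. This is a generic textbook formulation, not the paper's reference implementation, and the function name is illustrative:</p>

```python
import math

def square_to_concentric_disc(u, v):
    # Map (u, v) in [0,1]^2 to the unit disc, preserving fractional area.
    # Remap to [-1,1]^2 and pick the wedge by the larger coordinate.
    a = 2.0 * u - 1.0
    b = 2.0 * v - 1.0
    if a == 0.0 and b == 0.0:
        return (0.0, 0.0)
    if abs(a) > abs(b):
        r = a
        phi = (math.pi / 4.0) * (b / a)
    else:
        r = b
        phi = (math.pi / 2.0) - (math.pi / 4.0) * (a / b)
    return (r * math.cos(phi), r * math.sin(phi))
```

<p>Because concentric squares map to concentric circles, distortion stays low and stratification of the input points carries over to the disc.</p>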
<p>This paper provides an overview of forward and inverse mappings between a unit sphere, a unit disc, and a unit square. Quality measurements relevant for computer graphics applications are derived from tools used in the field of map projection, and a comparative analysis of the mapping methods is given.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Stochastic Light Culling]]></title>
    <link>http://jcgt.org/published/0005/01/02</link>
    <guid>http://jcgt.org/published/0005/01/02</guid>
    <category>Paper</category>
    <pubDate>2016-03-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/01/02/icon.png">]]><![CDATA[<p>Yusuke Tokuyoshi and Takahiro Harada, Stochastic Light Culling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 1, 35-60, 2016</p>]]><![CDATA[<p>Light culling techniques (e.g., <i>tiled lighting</i>) that clamp light ranges are often used in real-time applications such as video games. However, they can produce noticeable image darkening when many lights are present, because a bias is introduced: the part of the light emission beyond the clamping range is always disregarded. To avoid this undesirable bias, the method proposed in this paper determines each light's range stochastically using <i>Russian roulette</i>. When this random range is calculated from a user-specified error bound, our method yields a sublinear shading cost after culling while remaining unbiased. Our method stochastically changes the fall-off function of each light; since this approach is independent of the culling technique, any existing light culling framework can be used. In addition, our stochastic light culling is applicable not only to real-time rendering but also to offline rendering using path tracing. For the randomly distributed shading points produced by a multi-bounce path tracer, we also introduce a bounding-sphere-tree-based light culling algorithm and its GPU implementation. Using our method, we are able to render tens of thousands of light sources with smaller error than previous techniques for both real-time and offline rendering.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Announcing I3D papers invited to JCGT]]></title>
    <link>http://jcgt.org/news/2016-02-28-I3DinJCGT.html</link>
    <guid>http://jcgt.org/news/2016-02-28-I3DinJCGT.html</guid>
    <category>News</category>
    <pubDate>2016-02-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p> At the closing session of the <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>, we announced the following papers which have been invited to submit extended versions to JCGT. Congratulations to these authors!</p>
                <ul>
                    <li>Songfang Han, Pedro Sander, <strong>Triangle Reordering for Reduced Overdraw in Animated Scenes</strong>.</li>
                    <li>Johannes Jendersie, David Kuri, Thorsten Grosch, <strong>Precomputed Illuminance Composition for Real-Time Global Illumination</strong>.</li>
                    <li>Christoph Peters, Cedrick Münstermann, Nico Wetzstein, Reinhard Klein, <strong>Beyond Hard Shadows: Moment Shadow Maps for Single Scattering, Soft Shadows and Translucent Occluders</strong>.</li>
                    <li>Alberto Jaspe Villanueva, Fabio Marton,	Enrico Gobbetti, <strong>SSVDAGs: Symmetry-aware Sparse Voxel DAGs</strong>.</li>
                    <li>Tobias Zirr, Anton Kaplanyan, <strong>Real-time Rendering of Procedural Multiscale Materials</strong>.</li>
                </ul>
                <p>Other I3D authors who were not explicitly invited are still encouraged to submit extensions of their work to JCGT if they have at least 25% new material on the practical implementation or evaluation beyond what was included in the original I3D paper.</p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Rendering of Volumetric Motion Blur Using Temporally Unstructured Volumes]]></title>
    <link>http://jcgt.org/published/0005/01/01</link>
    <guid>http://jcgt.org/published/0005/01/01</guid>
    <category>Paper</category>
    <pubDate>2016-01-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0005/01/01/icon.png">]]><![CDATA[<p>Magnus Wrenninge, Efficient Rendering of Volumetric Motion Blur Using Temporally Unstructured Volumes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 5, no. 1, 1-34, 2016</p>]]><![CDATA[<p>We introduce <i>temporally unstructured volumes</i> (TUVs), a data structure for representing 4D volume data that is spatially structured but temporally unstructured, and <i>Reves</i>, an algorithm for constructing those volumes. The data structure supports efficient rendering of motion blur effects in volumes, including sub-frame motion. Reves is a volumetric extension to the classic Reyes algorithm, and produces TUV data sets through spatio-temporal stochastic sampling. Our method scales linearly and maximizes data locality and parallelism both during construction and rendering of volumes, making it suitable for ray tracing, and supports arbitrarily large input models. Compared to the current state-of-the-art in volumetric motion blur, our approach produces more accurate results using less memory and less time, and it can also provide high-quality re-timing of volumetric data sets, such as fluid simulations.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Announcing JCGT papers to be presented at I3D]]></title>
    <link>http://jcgt.org/news/2016-01-12-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2016-01-12-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2016-01-12</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                The following JCGT papers will be presented this year at the
                <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on
                    Interactive 3D Graphics and Games</a>:</p>
                <ul>
                    <li>Cyril Crassin, David Luebke, Michael Mara, Morgan McGuire, Brent Oster, Peter Shirley, Peter-Pike Sloan, and Chris Wyman, <em><a href="published/0004/04/01/">CloudLight</a></em><a href="published/0004/04/01/">: A System for Amortizing Indirect Lighting in Real-Time Rendering</a>, <em>Journal of Computer Graphics Techniques (JCGT)</em>, vol. 4, no. 4, 1-27, 2015</li>
                    <li>Gwyneth A. Bradbury, Kartic Subr, Charalampos Koniaris, Kenny Mitchell, and Tim Weyrich, <a href="published/0004/04/02/">Guided Ecological Simulation for Artistic Editing of Plant Distributions in Natural Scenes</a>, <em>Journal of Computer Graphics Techniques (JCGT)</em>, vol. 4, no. 4, 28-53, 2015</li>
                    <li>Jefferson Amstutz, Christiaan Gribble, Johannes Günther, and Ingo Wald, <a href="published/0004/04/04/">An Evaluation of Multi-Hit Ray Traversal in a BVH using Existing First-Hit/Any-Hit Kernels</a>, <em>Journal of Computer Graphics Techniques (JCGT)</em>, vol. 4, no. 4, 72-90, 2015</li>
                </ul>
                <p><a href="http://i3dsymposium.github.io/">Register now</a> to see the presentations of these excellent papers and all of the rest of the great papers at I3D. Early registration rates are available through February 4th.</p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Ray Tracing Kernels for Modern CPU Architectures]]></title>
    <link>http://jcgt.org/published/0004/04/05</link>
    <guid>http://jcgt.org/published/0004/04/05</guid>
    <category>Paper</category>
    <pubDate>2015-12-26</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/04/05/icon.png">]]><![CDATA[<p>Valentin Fuetterling, Carsten Lojewski, Franz-Josef Pfreundt,  and Achim Ebert, Efficient Ray Tracing Kernels for Modern CPU Architectures, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 4, 90-111, 2015</p>]]><![CDATA[<p>The recent push for interactive global illumination (GI) has established the 4-ary bounding volume hierarchy (BVH4) as a highly efficient acceleration structure for incoherent ray queries with single rays. Ray stream techniques augment the fast single-ray traversal with increased utilization of CPU vector units and leverage memory bandwidth for batches of rays. Despite their success, the proposed implementations suffer from high bookkeeping cost and batch fragmentation, especially for small batch sizes. Furthermore, due to the focus on incoherent rays, optimization for highly coherent BVH4 ray queries, such as primary visibility, has received little attention. Our contribution is twofold: For coherent ray sets, we introduce a large packet traversal tailored to the BVH4 that is faster than the original BVH2 variant, and for incoherent ray batches we propose a novel implementation of ray streams which reduces the bookkeeping cost while strictly maintaining the preferred traversal order of individual rays. Both algorithms are designed around a fast traversal order look-up mechanism. We evaluate our work for primary visibility and diffuse GI and demonstrate significant performance gains over current state-of-the-art implementations.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[An Evaluation of Multi-Hit Ray Traversal in a BVH using Existing First-Hit/Any-Hit Kernels]]></title>
    <link>http://jcgt.org/published/0004/04/04</link>
    <guid>http://jcgt.org/published/0004/04/04</guid>
    <category>Paper</category>
    <pubDate>2015-12-16</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/04/04/icon.png">]]><![CDATA[<p>Jefferson Amstutz, Christiaan Gribble, Johannes G&uuml;nther,  and Ingo Wald, An Evaluation of Multi-Hit Ray Traversal in a BVH using Existing First-Hit/Any-Hit Kernels, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 4, 72-90, 2015</p>]]><![CDATA[<p>We explore techniques for multi-hit ray tracing in a bounding volume hierarchy (BVH) using existing ray traversal kernels and intersection callbacks. BVHs are problematic for implementing multi-hit ray traversal correctly due to the potential for spatially overlapping leaf nodes. We demonstrate that the intersection callback feature of modern, high performance ray-tracing APIs enables correct and efficient implementation of multi-hit ray tracing despite this concern. Moreover, the callback-based approach enables multi-hit ray tracing using existing, highly optimized BVH data structures, mitigating maintenance issues imposed by hand-tuned multi-hit traversal kernels across various hardware architectures. Results show that memory-bandwidth limitations and SIMD vector width of the target hardware platform dictate ideal hit-point memory layout, as well as the point at which sorting should occur, in order to maximize performance with existing BVH traversal algorithms.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Designing Gratin: A GPU-Tailored Node-Based System]]></title>
    <link>http://jcgt.org/published/0004/04/03</link>
    <guid>http://jcgt.org/published/0004/04/03</guid>
    <category>Paper</category>
    <pubDate>2015-11-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/04/03/icon.png">]]><![CDATA[<p>Romain Vergne and Pascal Barla, Designing Gratin: A GPU-Tailored Node-Based System, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 4, 54-71, 2015</p>]]><![CDATA[<p>Nodal architectures have received an ever-increasing endorsement in computer graphics in recent years. However, creating a node-based system specifically tailored to GPU-centered applications with real-time performance is not straightforward. In this paper, we discuss the design choices we took in the making of Gratin, our open-source node-based system. This information is useful to graphics experts interested in crafting their own node-based system working on the GPU, either starting from scratch or taking inspiration from our source code. We first detail the architecture of Gratin at the graph level, with data structures permitting real-time updates even for large pipelines. We then present the design choices we made at the node level, which provide for three levels of programmability and, hence, a gentle learning curve. Finally, we show the benefits of our approach by presenting use cases in research prototyping and teaching.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Guided Ecological Simulation for Artistic Editing of Plant Distributions in Natural Scenes]]></title>
    <link>http://jcgt.org/published/0004/04/02</link>
    <guid>http://jcgt.org/published/0004/04/02</guid>
    <category>Paper</category>
    <pubDate>2015-11-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/04/02/icon.png">]]><![CDATA[<p>Gwyneth A. Bradbury, Kartic Subr, Charalampos Koniaris, Kenny Mitchell,  and Tim Weyrich, Guided Ecological Simulation for Artistic Editing of Plant Distributions in Natural Scenes, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 4, 28-53, 2015</p>]]><![CDATA[<p>In this paper we present a novel approach to author vegetation cover of large natural scenes. Unlike stochastic scatter-instancing tools for plant placement (such as multi-class blue noise generators), we use a simulation based on ecological processes to produce layouts of plant distributions. In contrast to previous work on ecosystem simulation, however, we propose a framework of global and local editing operators that can be used to interact directly with the live simulation. The result facilitates an artist-directed workflow with both spatially and temporally-varying control over the simulation's output. We compare our result against random-scatter solutions, also employing such approaches as a seed to our algorithm. We demonstrate the versatility of our approach within an iterative authoring workflow, comparing it to typical artistic methods.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Partnership between JCGT and I3D]]></title>
    <link>http://jcgt.org/news/2015-11-02-I3D.html</link>
    <guid>http://jcgt.org/news/2015-11-02-I3D.html</guid>
    <category>News</category>
    <pubDate>2015-11-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                I'm pleased to announce the continued partnership with the
                <a href="http://www.i3dsymposium.org/">ACM SIGGRAPH Symposium on
                    Interactive 3D Graphics and Games</a> for the third year in a
                row. Authors of accepted JCGT papers that were published in JCGT
                after 2014-10-21 are eligible to present their work at the 2016
                conference in Seattle. Presentation spots are limited and will be
                assigned in the order that we receive requests. If you are the
                author of an eligible JCGT paper, email your request to present at
                I3D
                to <a href="mailto:editor-in-chief@jcgt.org">editor-in-chief@jcgt.org</a>
                by November 30th.
                </p>
                <p>
                Exceptional papers from the conference proceedings will be selected
                by the board for follow-up, fast-tracked journal publication in
                JCGT. Follow-up work requires at least 30% new material and should
                focus on practical and implementation issues. We will announce these
                papers on the last day of the conference during the awards ceremony.
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[<i>CloudLight</i>: A System for Amortizing Indirect Lighting in Real-Time Rendering]]></title>
    <link>http://jcgt.org/published/0004/04/01</link>
    <guid>http://jcgt.org/published/0004/04/01</guid>
    <category>Paper</category>
    <pubDate>2015-10-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/04/01/icon.png">]]><![CDATA[<p>Cyril Crassin, David Luebke, Michael Mara, Morgan McGuire, Brent Oster, Peter Shirley, Peter-Pike Sloan,  and Chris Wyman, <i>CloudLight</i>: A System for Amortizing Indirect Lighting in Real-Time Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 4, 1-27, 2015</p>]]><![CDATA[<p>This paper describes the <i>CloudLight</i> system for computing indirect lighting asynchronously on an abstracted, computational "cloud," in support of real-time rendering for interactive 3D applications on a local client device.</p>

<p>We develop and evaluate remote-rendering systems for three different indirect illumination strategies: path-traced irradiance ("light") maps, photon mapping, and cone-traced voxels. We report results for robustness and scalability under realistic workloads by using assets from existing commercial games, scaling up to 50 simultaneous clients, and deployed on commercial hardware (NVIDIA GeForce GRID) and software (OptiX) infrastructure.</p>

<p>Remote illumination scales well enough for deployment under each of the three methods; however, they have unique characteristics. Streaming irradiance maps appear practical today for a laptop client with a distant, rack-mounted server or for a head-mounted virtual reality client with a nearby PC server. Voxels are well-suited to amortizing illumination within a server farm for mobile clients. Photons offer the best quality for more powerful clients.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Bonsai: Rapid Bounding Volume Hierarchy Generation using Mini Trees]]></title>
    <link>http://jcgt.org/published/0004/03/02</link>
    <guid>http://jcgt.org/published/0004/03/02</guid>
    <category>Paper</category>
    <pubDate>2015-09-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/03/02/icon.png">]]><![CDATA[<p>P. Ganestam, R. Barringer, M. Doggett,  and T. Akenine-M&ouml;ller, Bonsai: Rapid Bounding Volume Hierarchy Generation using Mini Trees, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 3, 23-42, 2015</p>]]><![CDATA[<p>We present an algorithm, called Bonsai, for rapidly building bounding volume hierarchies for ray tracing. Our method starts by computing midpoints of the triangle bounding boxes and then performs a rough hierarchical top-down split using the midpoints, creating triangle groups with tight bounding boxes. For each triangle group, a mini tree is built using an improved sweep SAH method. Once all mini trees have been built, we use them as leaves when building the top tree of the bounding volume hierarchy. We also introduce a novel and inexpensive optimization technique, called mini-tree pruning, that can be used to detect and improve poorly built parts of the tree. We achieve a little better than 100% in ray-tracing performance compared to a &quot;ground truth&quot; greedy top-down sweep SAH method, and our build times are the lowest we have seen with comparable tree quality.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Globs: A Primitive Shape for Graceful Blends Between Circles]]></title>
    <link>http://jcgt.org/published/0004/03/01</link>
    <guid>http://jcgt.org/published/0004/03/01</guid>
    <category>Paper</category>
    <pubDate>2015-08-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/03/01/icon.png">]]><![CDATA[<p>Andrew Glassner, Globs: A Primitive Shape for Graceful Blends Between Circles, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 3, 1-22, 2015</p>]]><![CDATA[<p>Geometric primitives are a staple of illustration systems. They provide artists with shapes that are easy to understand and create, yet they can be easily combined and modified into more complex structures. Here we introduce a new 2D geometric primitive called a <i>glob</i>. It is a smooth curved shape that blends two circles of any radii. Globs are controlled with a few parameters which have immediate geometric interpretations.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Marc Olano is the new Editor-in-Chief]]></title>
    <link>http://jcgt.org/news/2015-08-09-newEIC.html</link>
    <guid>http://jcgt.org/news/2015-08-09-newEIC.html</guid>
    <category>News</category>
    <pubDate>2015-08-09</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                As my own term as JCGT Editor-in-Chief draws to a close, it is my 
                great pleasure to perform my final duty of introducing the next EIC. 
                The advisory board unanimously appointed <a href="http://www.csee.umbc.edu/~olano/">Dr. Marc Olano</a> of the University of Maryland, Baltimore County, as 
                the new EIC. Marc is currently an Associate Professor, Director of the 
                Computer Science Game Development Track at UMBC, and Co-director of the VANGOGH
                research lab.
                </p>
                <p>
                Marc has an impressive record in both industry and academia. His
                personal area of expertise is hardware-assisted real-time rendering,
                including shading languages. He
                pioneered real-time shading languages at SGI, and has worked at
                Firaxis, Kodak, UNC, NCSA, and Rhino Robotics.
                </p>
                <p>
                It has been a pleasure serving the graphics community as EIC
                and working closely with the truly fantastic editors and authors
                of JCGT. I know that Marc will build on the great publication that
                we've built together as a community and lead us to new heights.
                <br> <br>
                --Morgan McGuire
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Practical Layered Reconstruction for Defocus and Motion Blur]]></title>
    <link>http://jcgt.org/published/0004/02/04</link>
    <guid>http://jcgt.org/published/0004/02/04</guid>
    <category>Paper</category>
    <pubDate>2015-06-10</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/02/04/icon.png">]]><![CDATA[<p>Jon Hasselgren, Jacob Munkberg,  and Karthik Vaidyanathan, Practical Layered Reconstruction for Defocus and Motion Blur, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 2, 45-58, 2015</p>]]><![CDATA[<p>We present several practical improvements to a recent layered reconstruction algorithm for defocus and motion blur. We leverage hardware texture filters, layer merging and sparse statistics to reduce computational complexity. Furthermore, we restructure the algorithm for better load-balancing on graphics processors, albeit at increased memory usage. We show run time reductions by 1/2 to 1/5 with minimal change in image quality vs. previous techniques, bringing this reconstruction technique to the real-time domain.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Interpreting Alpha]]></title>
    <link>http://jcgt.org/published/0004/02/03</link>
    <guid>http://jcgt.org/published/0004/02/03</guid>
    <category>Paper</category>
    <pubDate>2015-05-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/02/03/icon.png">]]><![CDATA[<p>Andrew Glassner, Interpreting Alpha, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 2, 30-44, 2015</p>]]><![CDATA[<p>Associating alpha values with pixel colors is an important technique in computer graphics, allowing us to create complex composite images. However, the interpretation of the meaning of alpha is often a fluid concept, switching between coverage (a measure of area) and opacity (a measure of a color) based on convenience. It can appear very strange that a single number could represent both of these qualitatively different values at the same time. By tracking coverage and opacity separately through the compositing process, we find that alpha is actually the product of coverage and opacity, so it is only one of these terms if we assume that the other has the value 1. This leads to a simple and consistent understanding of the internal structure of pixels that are described by color and alpha.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Procedural Modeling of Cave-like Channels]]></title>
    <link>http://jcgt.org/published/0004/02/02</link>
    <guid>http://jcgt.org/published/0004/02/02</guid>
    <category>Paper</category>
    <pubDate>2015-05-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/02/02/icon.png">]]><![CDATA[<p>Alex Pytel and Stephen Mann, Procedural Modeling of Cave-like Channels, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 2, 10-29, 2015</p>]]><![CDATA[<p>Hydraulic erosion that takes place underground leads to the formation of complex channel networks whose morphology emerges from the dynamic behavior of each channel, based on the presence of other channels nearby. Our approach to the problem of modeling such channel networks for computer graphics applications involves a self-organized model of channel development and a two-stage simulation for constructing the geometry of the channels. By emphasizing self-organization of flow and pressure, our simulation is able to reproduce several types of channel behavior known from hydrogeomorphology, such as tributary capture.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Modeling Real-World Terrain with Exponentially Distributed Noise]]></title>
    <link>http://jcgt.org/published/0004/02/01</link>
    <guid>http://jcgt.org/published/0004/02/01</guid>
    <category>Paper</category>
    <pubDate>2015-05-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/02/01/icon.png">]]><![CDATA[<p>Ian Parberry, Modeling Real-World Terrain with Exponentially Distributed Noise, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 2, 1-9, 2015</p>]]><![CDATA[<p>A statistical analysis of elevation data from a 160,000 square kilometer region at horizontal intervals from 5 meters up to 164 kilometers finds that terrain gradients appear to be exponentially distributed. Simple modifications to the Perlin noise algorithm and the amortized noise algorithm change the gradient distribution in each octave to an exponential distribution, resulting in varied and interesting procedurally generated terrain.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Direct Ray Tracing of Full-Featured Subdivision Surfaces <wbr/>with Bezier Clipping]]></title>
    <link>http://jcgt.org/published/0004/01/04</link>
    <guid>http://jcgt.org/published/0004/01/04</guid>
    <category>Paper</category>
    <pubDate>2015-03-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/01/04/icon.png">]]><![CDATA[<p>Takahito Tejima, Masahiro Fujita,  and Toru Matsuoka, Direct Ray Tracing of Full-Featured Subdivision Surfaces <wbr/>with Bezier Clipping, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 1, 69-83, 2015</p>]]><![CDATA[<p>We present a novel, direct ray-tracing method for rendering Catmull-Clark subdivision surfaces. Our method resolves an intersection with Bezier patches without performing polygon tessellation, as is done in most traditional renderers. It supports the full set of RenderMan and OpenSubdiv subdivision surface features, including hierarchical edits, creases, semi-sharp creases, corners, the Chaikin rule, and holes.</p>

<p>Our method is fast (compared to traditional CPU ray tracers), robust and crack-free, has very low memory requirements, and minimizes BVH construction and traversal overhead for high subdivision levels. It computes ray-surface intersection using Bezier clipping on patches obtained from the full-featured subdivision surfaces.</p>

<p>In this paper, we explain our implementation and evaluate its performance and memory characteristics for primary ray casting. We further show examples of indirect illumination using secondary ray casting and note that, because the method does not require view-dependent pre-tessellation, it may offer further advantages in that case.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Building a Balanced <i>k</i>-d Tree in O(<i>kn</i> log <i>n</i>) Time]]></title>
    <link>http://jcgt.org/published/0004/01/03</link>
    <guid>http://jcgt.org/published/0004/01/03</guid>
    <category>Paper</category>
    <pubDate>2015-03-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/01/03/icon.png">]]><![CDATA[<p>Russell A. Brown, Building a Balanced <i>k</i>-d Tree in O(<i>kn</i> log <i>n</i>) Time, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 1, 50-68, 2015</p>]]><![CDATA[<p>The original description of the <i>k</i>-d tree recognized that rebalancing techniques, such as are used to build an AVL tree or a red-black tree, are not applicable to a <i>k</i>-d tree. Hence, in order to build a balanced <i>k</i>-d tree, it is necessary to find the median of the data for each recursive partition. The choice of selection or sort that is used to find the median for each subdivision strongly influences the computational complexity of building a <i>k</i>-d tree.</p>

<p>This paper discusses an alternative algorithm that builds a balanced <i>k</i>-d tree by presorting the data in each of <i>k</i> dimensions prior to building the tree. It then preserves the order of these <i>k</i> sorts during tree construction and thereby avoids the requirement for any further sorting. Moreover, this algorithm is amenable to parallel execution via multiple threads. Compared to an algorithm that finds the median for each recursive subdivision, this presorting algorithm has equivalent performance for four dimensions and better performance for three or fewer dimensions.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[New Editorial Board Members]]></title>
    <link>http://jcgt.org/news/2015-02-15-newboard.html</link>
    <guid>http://jcgt.org/news/2015-02-15-newboard.html</guid>
    <category>News</category>
    <pubDate>2015-02-15</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                As we enter our third year of publishing JCGT, I welcome our new members of the editorial board:
                </p><ul>
                    <li>Angelo Pesce, Activision</li>
                    <li>Wenzel Jakob, EPFL</li>
                    <li>Yoshiharu Gotanda, tri-Ace</li>
                    <li>Wojciech Jarosz, Dartmouth College</li>
                    <li>Oliver Wang, Disney Research Zurich</li>
                    <li>Alexander Wilkie, Charles University</li>
                </ul>
                <p>
                Editors are nominated annually from among industry professionals and academic researchers who, in the spirit of the journal, have significantly advanced the field both through their own work and through service such as reviewing, public talks, conference committees, textbooks, and courses. Nominees voted in by the editorial board serve renewable two-year terms as corresponding editors for JCGT.</p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Real-Time Grass (and Other Procedural Objects) on Terrain]]></title>
    <link>http://jcgt.org/published/0004/01/02</link>
    <guid>http://jcgt.org/published/0004/01/02</guid>
    <category>Paper</category>
    <pubDate>2015-02-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/01/02/icon.png">]]><![CDATA[<p>Dimitris Papavasiliou, Real-Time Grass (and Other Procedural Objects) on Terrain, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 1, 26-49, 2015</p>]]><![CDATA[<p>Dense vegetation is ubiquitous in nature, but challenging to render due to its great geometric complexity. This paper presents a simple approach to vegetation rendering, based on modeling each plant separately using tessellation hardware in modern GPUs. To reduce scene complexity with as little loss in image quality as possible, we use continuous level-of-detail techniques. We do this both at the level of individual plants and the whole distribution of plants on the terrain. Additionally, we describe how to control the distribution of plants over the terrain, using a technique that does not require any additional maps, other than the terrain texture map, and supports multiple species of vegetation with little overhead. In order to leverage the flexibility afforded, we present a method of using separate tessellation and geometry programs for each vegetation species, which enables the efficient rendering of diverse vegetation, or any other objects, provided that their geometry can be generated through tessellation and geometry shaders on the GPU. The basic technique is applicable to any static triangulated mesh, but it can be adapted to the dynamic meshes typical in terrain applications. Details are given for the implementation on ROAM-based terrain.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[BVH Split Strategies for Fast Distance Queries]]></title>
    <link>http://jcgt.org/published/0004/01/01</link>
    <guid>http://jcgt.org/published/0004/01/01</guid>
    <category>Paper</category>
    <pubDate>2015-01-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0004/01/01/icon.png">]]><![CDATA[<p>Robin Ytterlid and Evan Shellshear, BVH Split Strategies for Fast Distance Queries, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 4, no. 1, 1-25, 2015</p>]]><![CDATA[<p>This paper presents a number of strategies focused on improving bounding volume hierarchies (BVHs) to accelerate distance queries. We present two classes of BVH split strategies that are designed specifically for fast distance queries but also work well for collision detection, and we compare their performance to existing strategies as well as to the new ones introduced here. The classes of split strategies improve distance query performance by constructing a BVH to provide better SSE-utilization during proximity queries and also improve the quality of bounding volumes of the BVH. When combined with each other, our two approaches can offer up to five times better distance query performance and collision queries up to twice as fast as standard split methods.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT Papers to Appear at I3D 2015]]></title>
    <link>http://jcgt.org/news/2015-01-12-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2015-01-12-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2015-01-12</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[

                <p>The following JCGT papers will be presented at I3D 2015:</p>

                <ul>
                    <li><a href="published/0003/04/05/">Fast Distance Queries for Triangles, Lines, and Points using SSE Instructions</a></li>
                    <li><a href="published/0003/04/06/">Real-time Radiance Caching using Chrominance Compression</a></li>
                    <li><a href="published/0003/04/02/">A Simple Method for Correcting Facet Orientations in Polygon Meshes Based on Ray Casting</a></li>
                    <li><a href="published/0003/04/04/">Efficient GPU Screen-Space Ray Tracing</a></li>
                    <li><a href="published/0004/01/01/">BVH Split Strategies for Fast Distance Queries</a></li>
                </ul>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Adaptive Depth Bias for Shadow Maps]]></title>
    <link>http://jcgt.org/published/0003/04/08</link>
    <guid>http://jcgt.org/published/0003/04/08</guid>
    <category>Paper</category>
    <pubDate>2014-12-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/08/icon.png">]]><![CDATA[<p>Hang Dou, Yajie Yan, Ethan Kerzner, Zeng Dai,  and Chris Wyman, Adaptive Depth Bias for Shadow Maps, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 146-162, 2014</p>]]><![CDATA[<p>Unexpected shadow acne and shadow detachment due to limited storage are pervasive under traditional shadow mapping. In this paper, we present a method to eliminate false self-shadowing through adaptive depth bias. By estimating the potential shadow caster for each fragment, we compute the minimal depth bias needed to avoid false self-shadowing. Our method is simple to implement and compatible with other extensions to the shadow mapping algorithm, such as cascaded shadow maps and adaptive shadow maps. Moreover, our method works for both 2D shadow maps and 3D binary shadow volumes.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast PVRTC Compression using Intensity Dilation]]></title>
    <link>http://jcgt.org/published/0003/04/07</link>
    <guid>http://jcgt.org/published/0003/04/07</guid>
    <category>Paper</category>
    <pubDate>2014-12-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/07/icon.png">]]><![CDATA[<p>Pavel Krajcevski and Dinesh Manocha, Fast PVRTC Compression using Intensity Dilation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 132-145, 2014</p>]]><![CDATA[<p>We present an algorithm for quickly compressing textures into PVRTC, the Imagination PowerVR Texture Compression format. Our formulation is based on intensity dilation and exploits the notion that the most important features of an image are those with high contrast ratios. We present an algorithm that uses morphological operations to distribute the areas of high contrast into the compression parameters. We use our algorithm to compress textures into PVRTC and compare our performance with prior techniques in terms of speed and quality.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Real-time Radiance Caching using <wbr/>Chrominance Compression]]></title>
    <link>http://jcgt.org/published/0003/04/06</link>
    <guid>http://jcgt.org/published/0003/04/06</guid>
    <category>Paper</category>
    <pubDate>2014-12-16</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/06/icon.png">]]><![CDATA[<p>Kostas Vardis, Georgios Papaioannou,  and Anastasios Gkaravelis, Real-time Radiance Caching using <wbr/>Chrominance Compression, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 111-131, 2014</p>]]><![CDATA[<p>This paper introduces the idea of expressing the radiance field in luminance/chrominance values and encoding the directional chrominance in lower detail. Reducing the spherical harmonics coefficients for the chrominance components allows the storage of luminance in higher-order spherical harmonics within the same memory budget, resulting in a finer representation of intensity transitions. We combine the radiance field chrominance compression with an optimized cache population scheme, generating cache points only at locations that are guaranteed to contribute to the reconstructed surface irradiance. These computation and storage savings allow the use of a higher-order spherical harmonics representation to sufficiently capture and reconstruct the directionality of diffuse irradiance, while maintaining fast and customizable performance. We exploit this radiance representation in a low-cost real-time radiance caching scheme, with support for arbitrary light bounces and view-independent indirect occlusion, and showcase the improvements in highly complex and dynamic environments. Furthermore, our general qualitative evaluation indicates benefits for offline rendering applications as well.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Fast Distance Queries for Triangles, Lines, <wbr/>and Points using SSE Instructions]]></title>
    <link>http://jcgt.org/published/0003/04/05</link>
    <guid>http://jcgt.org/published/0003/04/05</guid>
    <category>Paper</category>
    <pubDate>2014-12-13</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/05/icon.png">]]><![CDATA[<p>Evan Shellshear and Robin Ytterlid, Fast Distance Queries for Triangles, Lines, <wbr/>and Points using SSE Instructions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 86-110, 2014</p>]]><![CDATA[<p>This paper presents a suite of routines for computing the distance between combinations of triangles, lines, and points that we optimized for the x86 SSE SIMD (vector) instruction set. We measured a two to seven times throughput improvement over the naive, non-SSE routines.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient GPU Screen-Space Ray Tracing]]></title>
    <link>http://jcgt.org/published/0003/04/04</link>
    <guid>http://jcgt.org/published/0003/04/04</guid>
    <category>Paper</category>
    <pubDate>2014-12-09</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/04/icon.png">]]><![CDATA[<p>Morgan McGuire and Michael Mara, Efficient GPU Screen-Space Ray Tracing, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 73-85, 2014</p>]]><![CDATA[<p>We present an efficient GPU solution for screen-space 3D ray tracing against a depth buffer by adapting the perspective-correct DDA line rasterization algorithm. Compared to linear ray marching, this ensures sampling at a contiguous set of pixels and no oversampling. This paper provides, for the first time, full implementation details of a method that has been proven in production of recent major game titles. After explaining the optimizations, we then extend the method to support multiple depth layers for robustness. We include GLSL code and examples of pixel-shader ray tracing for several applications.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Artist Friendly Metallic Fresnel]]></title>
    <link>http://jcgt.org/published/0003/04/03</link>
    <guid>http://jcgt.org/published/0003/04/03</guid>
    <category>Paper</category>
    <pubDate>2014-12-09</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/03/icon.png">]]><![CDATA[<p>Ole Gulbrandsen, Artist Friendly Metallic Fresnel, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 64-72, 2014</p>]]><![CDATA[<p>Light reflecting off metallic surfaces is described by the Fresnel equations [Born and Wolf 1999], which are controlled by the complex index of refraction &eta; = <i>n</i> + <i>ik</i>.</p>

<p>Together, <i>n</i> and <i>k</i> determine the two characteristics of the Fresnel curve for a material: the reflectivity at normal incidence and how quickly it fades to white at grazing angles. However, this parameterization presents some artistic challenges because both characteristics depend on both parameters. An artist would ideally manipulate each property independently and have them on a unit scale. This paper describes a remapping of <i>n</i> and <i>k</i>, for the approximated unpolarized Fresnel equations, to the more intuitive <i>reflectivity</i>, <i>r</i>, and <i>edgetint</i>, <i>g</i>, both in the range from 0 to 1.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[A Simple Method for Correcting <wbr/>Facet Orientations in Polygon Meshes <wbr/>Based on Ray Casting]]></title>
    <link>http://jcgt.org/published/0003/04/02</link>
    <guid>http://jcgt.org/published/0003/04/02</guid>
    <category>Paper</category>
    <pubDate>2014-11-21</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/02/icon.png">]]><![CDATA[<p>Kenshi Takayama, Alec Jacobson, Ladislav Kavan,  and Olga Sorkine-Hornung, A Simple Method for Correcting <wbr/>Facet Orientations in Polygon Meshes <wbr/>Based on Ray Casting, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 53-63, 2014</p>]]><![CDATA[<p>We present a method for fixing incorrect orientations of facets in an input polygon mesh, a problem often seen in popular 3D model repositories, such that the front side of facets is visible from viewpoints outside of a solid shape represented or implied by the mesh. As opposed to previously proposed methods, which are rather complex and hard to reproduce, our method is very simple, requiring only sampling visibilities by shooting many rays. We also propose a simple heuristic to handle interior facets that are invisible from exterior viewpoints. Our method is evaluated extensively with the SHREC'10 Generic 3D Warehouse dataset containing manually designed meshes, and is demonstrated to be very effective.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Antialiased 2D Grid, Marker, and Arrow Shaders]]></title>
    <link>http://jcgt.org/published/0003/04/01</link>
    <guid>http://jcgt.org/published/0003/04/01</guid>
    <category>Paper</category>
    <pubDate>2014-11-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/04/01/icon.png">]]><![CDATA[<p>Nicolas P. Rougier, Antialiased 2D Grid, Marker, and Arrow Shaders, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 4, 1-52, 2014</p>]]><![CDATA[<p>Grids, markers, and arrows are important components in scientific visualisation. Grids are widely used in scientific plots and help visually locate data. Markers visualize individual points and aggregated data. Quiver plots show vector fields, such as a velocity buffer, through regularly-placed arrows. Being able to draw these components quickly is critical if one wants to offer interactive visualisation. This article provides algorithms with GLSL implementations for drawing grids, markers, and arrows using implicit surfaces that make it possible to quickly render pixel-perfect antialiased shapes.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Physics-Aware Voronoi Fracture<br/>with Example-Based Acceleration]]></title>
    <link>http://jcgt.org/published/0003/03/03</link>
    <guid>http://jcgt.org/published/0003/03/03</guid>
    <category>Paper</category>
    <pubDate>2014-10-07</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/03/03/icon.png">]]><![CDATA[<p>Sara C. Schvartzman and Miguel A. Otaduy, Physics-Aware Voronoi Fracture<br/>with Example-Based Acceleration, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 3, 34-54, 2014</p>]]><![CDATA[<p>This paper provides implementation details of the algorithm proposed in Schvartzman and Otaduy [2014] to simulate brittle fracture. We cast brittle fracture as the computation of a high-dimensional centroidal Voronoi diagram (CVD), where the distribution of fracture fragments is guided by the deformation field of the fractured object. We accelerate the fracture animation process with example-based learning of the fracture degree and a highly parallel tessellation algorithm.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT-I3D 2015 Partnership]]></title>
    <link>http://jcgt.org/news/2014-09-29-I3D.html</link>
    <guid>http://jcgt.org/news/2014-09-29-I3D.html</guid>
    <category>News</category>
    <pubDate>2014-09-29</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                I'm pleased to announce that we're continuing the partnership
                with
                the <a href="http://www.csee.umbc.edu/csee/research/vangogh/I3D2015/">ACM
                    SIGGRAPH Symposium on Interactive 3D Graphics and Games</a>
                for next year. Authors of accepted JCGT papers that were
                submitted to JCGT between 2013-10-23 and 2014-10-21 are
                eligible to present their work at the 2015 conference in San
                Francisco. Presentation spots are limited and will be assigned
                in the order that we receive requests.
                </p>
                <p>
                Exceptional papers from the conference proceedings will be
                selected by the board for followup fast-tracked journal
                publication in JCGT. Followup work requires at least 30% new
                material and should focus on practical and implementation
                issues. We will announce these papers on the last day of the
                conference during the awards ceremony.
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Filter-based Real-time Single Scattering <wbr/>using Rectified Shadow Maps]]></title>
    <link>http://jcgt.org/published/0003/03/02</link>
    <guid>http://jcgt.org/published/0003/03/02</guid>
    <category>Paper</category>
    <pubDate>2014-08-16</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/03/02/icon.png">]]><![CDATA[<p>Oliver Klehm, Hans-Peter Seidel,  and Elmar Eisemann, Filter-based Real-time Single Scattering <wbr/>using Rectified Shadow Maps, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 3, 7-34, 2014</p>]]><![CDATA[<p>Light scattering due to participating media is a complex process that is difficult to simulate in real time because light can be scattered towards the camera from everywhere in space. In this work, we replace the usually-employed ray-marching with an efficient ray-independent texture filtering process, which leads to a significant acceleration. Our algorithm assumes single-scattering media and takes as input a rectified shadow map, which can be produced efficiently with a new rectification scheme that we also propose in this work. Our method is fast and almost resolution independent, while producing near-reference results. For this reason, it is a good candidate for performance-critical applications, such as games.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Optimized Phong and Blinn-Phong Glossy Highlights]]></title>
    <link>http://jcgt.org/published/0003/03/01</link>
    <guid>http://jcgt.org/published/0003/03/01</guid>
    <category>Paper</category>
    <pubDate>2014-07-20</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/03/01/icon.png">]]><![CDATA[<p>Linus K&auml;llberg and Thomas Larsson, Optimized Phong and Blinn-Phong Glossy Highlights, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 3, 1-6, 2014</p>]]><![CDATA[<p>We describe a simple technique to compute the threshold at which glossy lobes in Phong or Blinn-Phong-like reflection models are negligible, thus avoiding the cost of an exponentiation and a few other operations. The technique is trivial to incorporate into existing shaders. For existing shaders that already branch when the glossy term is zero, it introduces little overhead. SIMD or GPU shaders that do not already branch will incur the overhead of a branch and may experience divergence.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs]]></title>
    <link>http://jcgt.org/published/0003/02/03</link>
    <guid>http://jcgt.org/published/0003/02/03</guid>
    <category>Paper</category>
    <pubDate>2014-06-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/02/03/icon.png">]]><![CDATA[<p>Eric Heitz, Understanding the Masking-Shadowing Function in Microfacet-Based BRDFs, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 2, 48-107, 2014</p>]]><![CDATA[<p>We provide a new presentation of the masking-shadowing functions (or geometric attenuation factors) in microfacet-based BRDFs and answer some common questions about their applications. Our main motivation is to define a correct (geometrically indicated), physically based masking function for application in microfacet models, as well as the properties that function should exhibit. Indeed, several different masking functions are often presented in the literature, and making the right choice is not always obvious. We start by showing that physically based masking functions are constrained by the projected area of the visible micro-surface onto the outgoing direction. We use this property to derive the distribution of visible normals from the microsurface, whose normalization factor is the masking function. We then show how the common form of microfacet-based BRDFs emerges from this distribution. As a consequence, the masking function is related to the correct normalization of microfacet-based BRDFs. However, while the correct masking function satisfies these normalization constraints, its explicit form can only be determined for a given microsurface profile.</p>

<p>Our derivation emphasizes that under the assumptions of their respective microsurface profiles, both Smith's function and the V-cavity masking function are correct. However, we show that the V-cavity microsurface yields results that miss the effect of occlusion, making it analogous to the shading of a normal map instead of a displacement map. This observation explains why the V-cavity model yields incorrect glossy highlights at grazing view angles.</p>

<p>We also review other common masking functions, which are not associated with a micro-surface profile and thus are not physically based. The insights gained from these observations motivate new research directions in the field of microfacet theory. For instance, we show that masking functions are stretch invariant and we show how this property can be used to derive the masking function for anisotropic microsurfaces in a straightforward way. We also discuss future work such as the incorporation of multiple scattering on the microsurface into BRDF models.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Amortized Noise]]></title>
    <link>http://jcgt.org/published/0003/02/02</link>
    <guid>http://jcgt.org/published/0003/02/02</guid>
    <category>Paper</category>
    <pubDate>2014-06-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/02/02/icon.png">]]><![CDATA[<p>Ian Parberry, Amortized Noise, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 2, 31-47, 2014</p>]]><![CDATA[<p>Perlin noise is often used to compute a regularly spaced grid of noise values. The <i>amortized noise</i> algorithm takes advantage of this regular call pattern to amortize the cost of floating-point computations over interpolated points using dynamic programming techniques. The 2D amortized noise algorithm uses a factor of 17/3 = 5.67 fewer floating-point multiplications than the 2D Perlin noise algorithm, resulting in a speedup by a factor of approximately 3.6-4.8 in practice on available desktop and laptop computing hardware. The 3D amortized noise algorithm uses a factor of 40/7 = 5.71 fewer floating-point multiplications than the 3D Perlin noise algorithm; however, the increasing overhead for the initialization of tables limits the speedup factor achieved in practice to around 2.25. Improvements to both 2D Perlin noise and 2D amortized noise include making them infinite and non-repeating by replacing the permutation table with a perfect hash function, and making them smoother by using quintic splines instead of cubic splines. While these improvements slow 2D Perlin noise down by a factor of approximately 32-92, they slow 2D amortized noise by a negligible amount.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Survey of Efficient Representations for Independent Unit Vectors]]></title>
    <link>http://jcgt.org/published/0003/02/01</link>
    <guid>http://jcgt.org/published/0003/02/01</guid>
    <category>Paper</category>
    <pubDate>2014-04-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/02/01/icon.png">]]><![CDATA[<p>Zina H. Cigolle, Sam Donow, Daniel Evangelakos, Michael Mara, Morgan McGuire,  and Quirin Meyer, Survey of Efficient Representations for Independent Unit Vectors, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 2, 1-30, 2014</p>]]><![CDATA[<p>The bandwidth cost and memory footprint of vector buffers are limiting factors for GPU rendering in many applications. This article surveys time- and space-efficient representations for the important case of non-register, in-core, statistically independent unit vectors, with emphasis on GPU encoding and decoding. These representations are appropriate for unit vectors in a geometry buffer or attribute stream--where no correlation between adjacent vectors is easily available--or for those in a normal map where quality higher than that of DXN is required. We do not address out-of-core and register storage vectors because they favor minimum-space and maximum-speed alternatives, respectively.</p>

<p>We evaluate precision and its qualitative impact across these techniques and give CPU reference implementations. For those methods with good quality and reasonable performance, we provide optimized GLSL GPU implementations of encoding and decoding.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Designer Worlds: Procedural Generation of Infinite Terrain from Real-World Elevation Data]]></title>
    <link>http://jcgt.org/published/0003/01/04</link>
    <guid>http://jcgt.org/published/0003/01/04</guid>
    <category>Paper</category>
    <pubDate>2014-03-11</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/01/04/icon.png">]]><![CDATA[<p>Ian Parberry, Designer Worlds: Procedural Generation of Infinite Terrain from Real-World Elevation Data, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 1, 74-85, 2014</p>]]><![CDATA[<p>The standard way to procedurally generate random terrain for video games and other applications is to post-process the output of a fast noise generator such as Perlin noise. Tuning the post-processing to achieve particular types of terrain requires game designers to be reasonably well-trained in mathematics. A well-known variant of Perlin noise called value noise is used in a process accessible to designers trained in geography to generate geotypical terrain based on elevation statistics drawn from widely available sources such as the United States Geological Survey (USGS). A step-by-step process for downloading and creating terrain from real-world USGS elevation data is described, and an implementation in C++ is given.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT Papers at I3D'14]]></title>
    <link>http://jcgt.org/news/2014-03-07-JCGTatI3D.html</link>
    <guid>http://jcgt.org/news/2014-03-07-JCGTatI3D.html</guid>
    <category>News</category>
    <pubDate>2014-03-07</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                The authors of the following previously-published JCGT papers will present their work at
                the <a href="http://i3dsymposium.org">ACM Symposium on Interactive 3D
                    Graphics and Games (I3D) 2014 conference</a> next week:

                </p><ul>
                    <li>Gribble, Naveros, and Kerzner,  <a href="published/0003/01/01/">Multi-Hit Ray Traversal</a></li>
                    <li>Rauwendaal and Bailey, <a href="published/0002/01/02/">Hybrid Computational Voxelization Using the Graphics Pipeline</a></li>
                    <li>Sloan, <a href="published/0002/02/06/">Efficient Spherical Harmonic Evaluation</a></li>
                    <li>McGuire and Bavoil, <a href="published/0002/02/09/">Weighted Blended Order-Independent Transparency</a></li>
                    <li>Johnsson and Akenine-Möller, <a href="published/0003/01/03/">Measuring Per-Frame Energy Consumption of Real-Time Graphics Applications</a></li>
                    <li>Toth, <a href="published/0002/02/07/">Avoiding Texture Seams by Discarding Filter Taps</a></li>
                </ul>

                <p>After the presentations, we will post their presentation
                slides and any additional results on the papers' JCGT pages.</p>
                <p>
                At the conference, we will announce papers from the I3D
                proceedings that are invited to publish extended journal
                versions in JCGT with an accelerated editing process. All authors
                of work at any graphics venue are of course always welcome to submit
                followup work to JCGT through the regular review process.
                At I3D'14, we particularly invite community discussion and
                comments, directed to Morgan McGuire (JCGT Editor-in-Chief)
                and to John Keyser and Pedro Sander (I3D'15 Papers Chairs),
                on the impact of this one-year JCGT-I3D partnership and the
                possibility of extending it in the future.
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Measuring Per-Frame Energy Consumption of Real-Time Graphics Applications]]></title>
    <link>http://jcgt.org/published/0003/01/03</link>
    <guid>http://jcgt.org/published/0003/01/03</guid>
    <category>Paper</category>
    <pubDate>2014-03-05</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/01/03/icon.png">]]><![CDATA[<p>Bj&ouml;rn Johnsson and Tomas Akenine-M&ouml;ller, Measuring Per-Frame Energy Consumption of Real-Time Graphics Applications, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 1, 60-73, 2014</p>]]><![CDATA[<p>Energy and power efficiency are becoming important topics within the graphics community. In this paper, we present a simple, straightforward method for measuring per-frame energy consumption of real-time graphics workloads. The method is non-invasive, meaning that source code is not needed, which makes it possible to measure a much wider range of applications. We also discuss certain behaviors of the measured platforms that can affect energy measurements, e.g., what happens when calling <code>glFinish()</code>, which ensures that all issued graphics commands have finished executing. Measurements are done both on a smartphone and on CPUs with integrated graphics processors.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Survey of Texture Mapping Techniques for Representing and Rendering Volumetric Mesostructure]]></title>
    <link>http://jcgt.org/published/0003/01/02</link>
    <guid>http://jcgt.org/published/0003/01/02</guid>
    <category>Paper</category>
    <pubDate>2014-02-27</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/01/02/icon.png">]]><![CDATA[<p>Charalampos Koniaris, Darren Cosker, Xiaosong Yang,  and Kenny Mitchell, Survey of Texture Mapping Techniques for Representing and Rendering Volumetric Mesostructure, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 1, 18-59, 2014</p>]]><![CDATA[<p>Representation and rendering of volumetric mesostructure using texture mapping can potentially allow the display of highly detailed, animated surfaces at a low performance cost. Given the need for consistently more detailed and dynamic worlds rendered in real-time, volumetric texture mapping now becomes an area of great importance.</p>

<p>In this survey, we review the developments of algorithms and techniques for representing volumetric mesostructure as texture-mapped detail. Our goal is to provide researchers with an overview of novel contributions to volumetric texture mapping as a starting point for further research and developers with a comparative review of techniques, giving insight into which methods would be fitting for particular tasks.</p>

<p>We start by defining the scope of our domain and provide background information regarding mesostructure and volumetric texture mapping. Existing techniques are assessed in terms of content representation and storage as well as quality and performance of parameterization and rendering. Finally, we provide insights to the field and opportunities for research directions in terms of real-time volumetric texture-mapped surfaces under deformation.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Multi-Hit Ray Traversal]]></title>
    <link>http://jcgt.org/published/0003/01/01</link>
    <guid>http://jcgt.org/published/0003/01/01</guid>
    <category>Paper</category>
    <pubDate>2014-02-07</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0003/01/01/icon.png">]]><![CDATA[<p>Christiaan Gribble, Alexis Naveros,  and Ethan Kerzner, Multi-Hit Ray Traversal, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 3, no. 1, 1-17, 2014</p>]]><![CDATA[<p><i>Multi-hit</i> ray traversal is a class of ray traversal algorithms that finds one or more, and possibly all, primitives intersected by a ray, ordered by point of intersection. Multi-hit traversal generalizes traditional <i>first-hit</i> ray traversal and is useful in computer graphics and physics-based simulation. We introduce an efficient algorithm for ordered multi-hit ray traversal, investigate its performance in a GPU ray tracer, and demonstrate two problems easily solved with our algorithm.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Practical Illumination from Flames]]></title>
    <link>http://jcgt.org/published/0002/02/10</link>
    <guid>http://jcgt.org/published/0002/02/10</guid>
    <category>Paper</category>
    <pubDate>2013-12-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/10/icon.png">]]><![CDATA[<p>Ryusuke Villemin and Christophe Hery, Practical Illumination from Flames, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 142-155, 2013</p>]]><![CDATA[<p>We present a method for creating and sampling volumetric light sources directly using the raw volumetric data under a Monte Carlo framework. Our algorithm does not require any preprocessing or baking, so it is compatible with progressive rendering. Since it does not use imposters, we can achieve high-quality results with any mirror, glossy, or diffuse object. To be able to efficiently sample the volume emission, we use two strategies: light sampling (for the volume) and BRDF sampling (for the receiving object), and express them in the same domain so that we can apply multiple importance sampling (MIS) to compute the final result. Our technique focuses on the emission portion of volumetric rendering and is complementary to all other recent publications handling scattering effects.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Weighted Blended Order-Independent Transparency]]></title>
    <link>http://jcgt.org/published/0002/02/09</link>
    <guid>http://jcgt.org/published/0002/02/09</guid>
    <category>Paper</category>
    <pubDate>2013-12-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/09/icon.png">]]><![CDATA[<p>Morgan McGuire and Louis Bavoil, Weighted Blended Order-Independent Transparency, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 122-141, 2013</p>]]><![CDATA[<p>Many rendering phenomena can be modeled with partial coverage. These include flames, smoke, hair, clouds, properly-filtered silhouettes, non-refractive glass, and special effects such as forcefields and magic. A challenge in rendering these is that the value of a pixel partly covered by multiple surfaces depends on the depth order of the surfaces. One approach to avoid the cost of storing and sorting primitives or fragments is to alter the compositing operator so that it is order independent, thus allowing a pure streaming approach.</p>

<p>We describe two previous methods for implementing blended order-independent transparency, and then introduce two new methods derived from them. Both new methods guarantee correct coverage of background and strictly improve color representation over the previous methods. Because these require only classic OpenGL-style blending and bounded memory, they may be preferred to A-buffer like methods for mobile devices, consoles, and other constrained rendering environments. They are attractive for all platforms for models such as particle systems and hair, where discrete changes in surface ordering that will be perceived as popping are undesirable and a soft transition between surfaces is preferred.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Shader-Based Antialiased, Dashed, Stroked Polylines]]></title>
    <link>http://jcgt.org/published/0002/02/08</link>
    <guid>http://jcgt.org/published/0002/02/08</guid>
    <category>Paper</category>
    <pubDate>2013-12-06</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/08/icon.png">]]><![CDATA[<p>Nicolas P. Rougier, Shader-Based Antialiased, Dashed, Stroked Polylines, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 105-121, 2013</p>]]><![CDATA[<p>Dashed stroked paths are a widely-used feature found in the vast majority of vector-drawing software and libraries. They enable, for example, highlighting a given path, such as the current selection in drawing software, or distinguishing curves in a scientific plotting package. This paper introduces a shader-based method for rendering arbitrary dash patterns along any continuous polyline (smooth or broken). The proposed method does not tessellate individual dash patterns and allows for fast and nearly accurate rendering of any user-defined dash pattern and caps. Benchmarks indicate a slowdown factor between 1.1 and 2.1 and a memory consumption increase by a factor between 3 and 6. Furthermore, the method can be used for solid thick polylines with correct caps and joins with only a slowdown factor of 1.1.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Avoiding Texture Seams by Discarding Filter Taps]]></title>
    <link>http://jcgt.org/published/0002/02/07</link>
    <guid>http://jcgt.org/published/0002/02/07</guid>
    <category>Paper</category>
    <pubDate>2013-12-06</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/07/icon.png">]]><![CDATA[<p>Robert Toth, Avoiding Texture Seams by Discarding Filter Taps, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 91-104, 2013</p>]]><![CDATA[<p>Mapping textures to complex objects is a non-trivial task. It is often desirable or even necessary to map separate textures to different parts of an object, but it may be difficult to obtain high-quality texture filtering across the seams where textures meet. Existing real-time methods either require significant amounts of memory, prohibit use of wide texture filters, or have high complexity. In this paper, we present a new method for sampling textures which is surprisingly simple, does not require padding, and results in high image quality. The method discards filter taps that extend beyond the texture boundary and relies on multisample antialiasing to produce good image quality. Our method is suitable for real-time implementations of rectangular-chart-based assets, such as per-patch texturing (e.g., Ptex).</p>]]></description>
  </item>
  <item>
    <title><![CDATA[ISSN Issued]]></title>
    <link>http://jcgt.org/news/2013-11-17-issn.html</link>
    <guid>http://jcgt.org/news/2013-11-17-issn.html</guid>
    <category>News</category>
    <pubDate>2013-11-17</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                Managing Editor Alice Peters arranged for JCGT's International
                Standard Serial Number (ISSN) 2331-7418 through the ISSN
                Center at the Library of Congress. The ISSN is now listed in
                JCGT article BibTeX files.
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[JCGT-I3D 2014 Partnership]]></title>
    <link>http://jcgt.org/news/2013-09-26-I3D.html</link>
    <guid>http://jcgt.org/news/2013-09-26-I3D.html</guid>
    <category>News</category>
    <pubDate>2013-09-26</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                I'm pleased to announce a special partnership between JCGT and
                the <a href="http://i3dsymposium.org">ACM Symposium on
                    Interactive 3D Graphics and Games (I3D) 2014 conference</a>.
                This year, I3D will include presentation of JCGT papers in the
                program.  Furthermore, selected I3D papers will be invited to
                follow up with extended implementation details in JCGT.  This
                brings together the advantages of the considered, detailed
                exploration of a topic in a journal and the public
                presentation and debate of a conference.  JCGT and I3D are a
                particularly good fit for one another, with overlapping
                communities and a shared focus on practical, robust methods
                with collaboration between academia and industry.
                </p>
                <p>
                <b>JCGT Papers Presented at I3D</b>:
                The authors of any paper submitted to JCGT before 22 October 2013 
                who want to present at I3D 2014 should e-mail
                <a href="mailto:managing-editor@jcgt.org">managing-editor@jcgt.org</a>
                before 4 December 2013.  In the event that there are more
                authors who wish to take advantage of this opportunity than
                the schedule can accommodate, we will prioritize speaking slots
                in the order that e-mails were received.  Authors whose papers
                are under review may request a slot conditional on their paper
                being accepted.  Speakers must register for and attend the
                conference as would authors presenting papers in the regular
                I3D program.
                </p>
                <p>
                <b>I3D Paper Followup in JCGT</b>: Every paper accepted at I3D will
                automatically be considered by the I3D committee and JCGT
                board for subsequent followup in the journal.  Authors of
                papers that are recommended for followup in JCGT will have the
                opportunity to revise at least 30% of their paper to address
                the implementation details, limitations, and robustness
                evaluation emphasized by the journal.  At least two of the
                anonymous I3D reviewers of the original paper will also
                review the followup for JCGT, providing continuity and
                accelerating the publication process.  The papers selected for
                this process will be announced at the I3D closing ceremony.
                </p>
                <p style="padding-left:280px">
                <i>—Morgan&nbsp;McGuire</i>
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[Efficient Spherical Harmonic Evaluation]]></title>
    <link>http://jcgt.org/published/0002/02/06</link>
    <guid>http://jcgt.org/published/0002/02/06</guid>
    <category>Paper</category>
    <pubDate>2013-09-08</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/06/icon.png">]]><![CDATA[<p>Peter-Pike Sloan, Efficient Spherical Harmonic Evaluation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 84-90, 2013</p>]]><![CDATA[<p>The real spherical harmonics have been used extensively in computer graphics, but the conventional representation is in terms of spherical coordinates and involves expensive trigonometric functions. While the polynomial form is often listed for low orders, directly evaluating the basis functions independently is inefficient. This paper describes in detail how recurrence relations can be used to generate pre-factored evaluation code that is smaller and more efficient, and it presents a performance comparison of several alternative techniques for evaluating the spherical harmonics.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[2D Polyhedral Bounds of a Clipped, Perspective-Projected 3D Sphere]]></title>
    <link>http://jcgt.org/published/0002/02/05</link>
    <guid>http://jcgt.org/published/0002/02/05</guid>
    <category>Paper</category>
    <pubDate>2013-08-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/05/icon.png">]]><![CDATA[<p>Michael Mara and Morgan McGuire, 2D Polyhedral Bounds of a Clipped, Perspective-Projected 3D Sphere, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 70-83, 2013</p>]]><![CDATA[<p>We show how to efficiently compute 2D polyhedral bounds of the (elliptic) perspective projection of a 3D sphere that has been clipped to the near plane. For the special case of a 2D axis-aligned bounding box, the algorithm is especially efficient.</p>

<p>This has applications for bounding the screen-space effect of an emitter under deferred shading, bounding the kernel in density estimation during image space photon mapping, and bounding the pixel extent of objects for ray casting.</p>

<p>Our solution is designed to elegantly handle the case where the sphere crosses the near clipping plane and efficiently handle all cases on vector processors. In addition to the algorithm, we provide implementations of two common applications: light-tile classification in C++ and expanding an attribute array of spheres (encoded as points and radii) into polygons that cover their silhouettes as a GLSL geometry shader.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading]]></title>
    <link>http://jcgt.org/published/0002/02/04</link>
    <guid>http://jcgt.org/published/0002/02/04</guid>
    <category>Paper</category>
    <pubDate>2013-08-12</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/04/icon.png">]]><![CDATA[<p>Christopher A. Burns and Warren A. Hunt, The Visibility Buffer: A Cache-Friendly Approach to Deferred Shading, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 55-69, 2013</p>]]><![CDATA[<p>Forward rendering pipelines shade fragments in triangle-submission order. Consequently, non-visible fragments are often wastefully shaded before being subsequently occluded&mdash;a phenomenon known as over-shading. A popular way to avoid over-shading is to only compute surface attributes for each fragment during a forward pass and store them in a buffer. Lighting is performed in a subsequent pass, consuming the attributes buffer. This strategy is commonly known as deferred shading.</p>

<p>We identify two notable deficits. First, the buffer for the geometric surface attributes (g-buffer) is large, often 20+ bytes per visibility sample. The bandwidth required to read and write large g-buffers can be prohibitive on mobile or integrated GPUs. Second, the separation of shading work and visibility is incomplete. Surface attributes are eagerly computed in the forward pass on occluded fragments, wasting texture bandwidth and compute resources.</p>

<p>To address these problems, we propose to replace the g-buffer with a simple visibility buffer that only stores a triangle index and instance ID per sample, encoded in as few as four bytes. This significantly reduces storage and bandwidth requirements. Generating a visibility buffer is cheaper than generating a g-buffer and does not require texture reads or any surface-material-specific computation. The deferred shading pass accesses triangle data with this index to compute barycentric coordinates and interpolated vertex data. By minimizing the memory footprint of the visibility solution, we reduce the working set of the deferred rendering pipeline. This results in improved performance on bandwidth-limited GPU platforms, especially for high-resolution workloads.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Robust BVH Ray Traversal]]></title>
    <link>http://jcgt.org/published/0002/02/02</link>
    <guid>http://jcgt.org/published/0002/02/02</guid>
    <category>Paper</category>
    <pubDate>2013-07-19 (corrected 2015-07-03)</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/02/icon.png">]]><![CDATA[<p>Thiago Ize, Robust BVH Ray Traversal, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 12-27, 2013</p>]]><![CDATA[<p>Most axis-aligned bounding-box (AABB) based BVH-construction algorithms are numerically robust; however, BVH ray traversal algorithms for ray tracing are still susceptible to numerical precision errors. We show where these errors come from and how they can be efficiently avoided during traversal of BVHs that use AABBs.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Harnessing the GPU for Real-Time Haptic Tissue Simulation]]></title>
    <link>http://jcgt.org/published/0002/02/03</link>
    <guid>http://jcgt.org/published/0002/02/03</guid>
    <category>Paper</category>
    <pubDate>2013-07-19</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/03/icon.png">]]><![CDATA[<p>C. E. Etheredge, E. E. Kunst,  and A. J. B. Sanders, Harnessing the GPU for Real-Time Haptic Tissue Simulation, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 28-54, 2013</p>]]><![CDATA[<p>Virtual surgery simulators are emerging as a training method for medical specialists and are expected to provide a virtual environment that is realistic and responsive enough to be able to physically simulate a wide variety of medical scenarios. Haptic interaction with the environment requires an underlying physical model that is dynamic, deformable, and computable in real time at very high frame rates.</p>

<p>By harnessing the GPU, we are able to simulate an environment with soft volumetric tissue that supports real-time deformation and two-way haptic interaction. In particular, we present a parallel algorithm that uses a volumetric mass-spring model to simulate this environment, implemented using NVIDIA CUDA. Our algorithm is implemented and used as an integral part of <i>Virtual Competence Training Area</i> (VICTAR), an extendable virtual surgery simulation software framework by Vrest Medical. We show that our method is capable of simulating a model with over 100 K masses at 1000 Hz on an NVIDIA Tesla C2050. We also discuss the scalability and potential future applications of the algorithm.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Simple Analytic Approximations to the CIE XYZ Color Matching Functions]]></title>
    <link>http://jcgt.org/published/0002/02/01</link>
    <guid>http://jcgt.org/published/0002/02/01</guid>
    <category>Paper</category>
    <pubDate>2013-07-12</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/02/01/icon.png">]]><![CDATA[<p>Chris Wyman, Peter-Pike Sloan,  and Peter Shirley, Simple Analytic Approximations to the CIE XYZ Color Matching Functions, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 2, 1-11, 2013</p>]]><![CDATA[<p>We provide three analytic fits to the CIE <i>x&#773;</i>, <i>y&#773;</i>, and <i>z&#773;</i> color matching curves commonly used in predictive and spectral renderers as an intermediate between light spectra and RGB colors. Any of these fits can replace the standard tabulated CIE curves. Using tabulated curves can introduce typos, encourage crude simplifying approximations, or add opportunities to download curves from sources featuring inconsistent or incorrect data. Our analytic fits are simple to implement and verify. While fitting introduces error, our fits introduce less than the variance between the human-subject data aggregated into the CIE standard. Additionally, common rendering approximations, such as coarse spectral binning, introduce significantly more error. We provide simple, analytic fits in Equations 2 and 3, but even our more accurate fit in Equation 4 only requires ten lines of code.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Watertight Ray/Triangle Intersection]]></title>
    <link>http://jcgt.org/published/0002/01/05</link>
    <guid>http://jcgt.org/published/0002/01/05</guid>
    <category>Paper</category>
    <pubDate>2013-06-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/01/05/icon.png">]]><![CDATA[<p>Sven Woop, Carsten Benthin,  and Ingo Wald, Watertight Ray/Triangle Intersection, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 1, 65-82, 2013</p>]]><![CDATA[<p>We propose a novel algorithm for ray/triangle intersection tests that, unlike most other such algorithms, is watertight at both edges <i>and</i> vertices for adjoining triangles, while also maintaining the same performance as simpler algorithms that are not watertight. Our algorithm is straightforward to implement, and is, in particular, robust for all triangle configurations including extreme cases, such as small and needle-like triangles. We achieve this robustness by transforming the intersection problem using a translation, shear, and scale transformation into a coordinate space where the ray starts at the origin and is directed, with unit length, along one coordinate axis. This simplifies the remaining intersection problem to a 2D problem where edge tests can be done conservatively using single-precision floating-point arithmetic. Using our algorithm, numerically challenging cases, where single precision is insufficient, can be detected with almost no overhead and can be accurately handled through a (rare) fallback to double precision. Because our algorithm is conservative but not exact, we must dynamically enlarge bounds during BVH traversal.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Higher Quality 2D Text Rendering]]></title>
    <link>http://jcgt.org/published/0002/01/04</link>
    <guid>http://jcgt.org/published/0002/01/04</guid>
    <category>Paper</category>
    <pubDate>2013-04-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/01/04/icon.png">]]><![CDATA[<p>Nicolas P. Rougier, Higher Quality 2D Text Rendering, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 1, 50-64, 2013</p>]]><![CDATA[<p>Even though text is pervasive in most 3D applications, there is surprisingly no native support for text rendering in OpenGL. To cope with this absence, Mark Kilgard introduced the use of texture fonts [Kilgard 1997]. This technique is well known and widely used, and it ensures both good performance and decent quality in most situations. However, the quality may degrade strongly in orthographic mode (screen space) due to pixelation effects at large sizes, and to legibility problems at small sizes caused by incorrect hinting and positioning of glyphs. In this paper, we consider font-texture rendering to develop methods to ensure the highest quality in orthographic mode. The method used allows for both the accurate rendering and positioning of any glyph on the screen. While the method is compatible with complex shaping and/or layout (e.g., the Arabic alphabet), these specific cases are not studied in this article.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Dynamic Stackless Binary Tree Traversal]]></title>
    <link>http://jcgt.org/published/0002/01/03</link>
    <guid>http://jcgt.org/published/0002/01/03</guid>
    <category>Paper</category>
    <pubDate>2013-03-18</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/01/03/icon.png">]]><![CDATA[<p>Rasmus Barringer and Tomas Akenine-M&ouml;ller, Dynamic Stackless Binary Tree Traversal, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 1, 38-49, 2013</p>]]><![CDATA[<p>A fundamental part of many computer algorithms involves traversing a binary tree.  One notable example is traversing a space-partitioning acceleration structure when computing ray-traced images.  Traditionally, the traversal requires a stack to be temporarily stored for each ray, which results in both additional storage and memory-bandwidth usage.  We present a novel algorithm for traversing a binary tree that does not require a stack and, unlike previous approaches, works with dynamic descent direction without restarting.  Our algorithm will visit exactly the same sequence of nodes as a stack-based counterpart with extremely low computational overhead.  No additional memory accesses are made for implicit binary trees. For sparse trees, parent links are used to backtrack the shortest path.  We evaluate our algorithm using a ray tracer with a bounding volume hierarchy for which source code is supplied.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Hybrid Computational Voxelization Using the Graphics Pipeline]]></title>
    <link>http://jcgt.org/published/0002/01/02</link>
    <guid>http://jcgt.org/published/0002/01/02</guid>
    <category>Paper</category>
    <pubDate>2013-03-18</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/01/02/icon.png">]]><![CDATA[<p>Randall Rauwendaal and Mike Bailey, Hybrid Computational Voxelization Using the Graphics Pipeline, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 1, 15-37, 2013</p>]]><![CDATA[<p>This paper presents an efficient computational voxelization approach that utilizes the graphics pipeline. Our approach is hybrid in that it performs a precise gap-free computational voxelization, employs fixed-function components of the GPU, and utilizes the stages of the graphics pipeline to improve parallelism. This approach makes use of the latest features of OpenGL and fully supports both conservative and thin-surface voxelization. In contrast to other computational voxelization approaches, our approach is implemented entirely in OpenGL and achieves both triangle and fragment parallelism through its use of geometry and fragment shaders. By exploiting features of the existing graphics pipeline, we are able to rapidly compute accurate scene voxelizations in a manner that integrates well with existing OpenGL applications and is robust across many different models.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Adaptive Supersampling for Deferred Anti-Aliasing]]></title>
    <link>http://jcgt.org/published/0002/01/01</link>
    <guid>http://jcgt.org/published/0002/01/01</guid>
    <category>Paper</category>
    <pubDate>2013-03-02</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0002/01/01/icon.png">]]><![CDATA[<p>Matthias Holl&auml;nder, Tamy Boubekeur,  and Elmar Eisemann, Adaptive Supersampling for Deferred Anti-Aliasing, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 2, no. 1, 1-14, 2013</p>]]><![CDATA[<p>We present a novel approach to perform anti-aliasing in a deferred-rendering context. Our approach is based on supersampling; the scene is rasterized into an enlarged geometry buffer, i.e., each pixel of the final image corresponds to a window of attributes within this buffer. For the final image, we sample this window adaptively based on different metrics accounting for geometry and appearance to derive the pixel shading. Further, we use anisotropic filtering to avoid texturing artifacts. Our approach concentrates the workload where needed and allows faster shading in various supersampling scenarios, especially when the shading cost per pixel is high.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Opening Volume 2]]></title>
    <link>http://jcgt.org/news/2013-01-18-vol2.html</link>
    <guid>http://jcgt.org/news/2013-01-18-vol2.html</guid>
    <category>News</category>
    <pubDate>2013-01-18</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                The inaugural Volume 1, Issue 1 of <i>JCGT</i> contains five
                articles written and accepted in 2012 that total slightly over 100 pages.
                These papers were collectively <b>downloaded more than 40,000
                    times</b> in the past six months.  These are strong and
                diverse papers that span the intended breadth of <i>JCGT</i>,
                including novel research, long-form systems, and position
                papers that are all of immediate practical value in computer
                graphics.  I'm currently adjusting the website to post the
                last two of these papers and appropriately archive the volume
                now that it is closed.  Note that these papers are all
                Creative Commons licensed so that they may be distributed
                freely. 
                </p>
                <p>
                The average time from submission until a first decision for
                the first volume was <b>four weeks</b>, with an average of 25
                days from that point to publication for recommended papers.
                These are exceptionally fast turnaround times for thorough
                reviewing and editing.  They are made possible by our
                dedicated editorial board listed on the masthead on the right
                and by our anonymous reviewers.  I anticipate a slight
                increase in those durations for January and February 2013 due
                to holidays, the SIGGRAPH paper deadline, and SIGGRAPH
                paper reviewing.  The editing and reviewing at <i>JCGT</i> are
                the best that I've seen among contemporary publication
                venues in the field.
                </p>
                <p>
                <b>The journal is now accepting papers for Volume 2</b>. This
                volume will be divided into two issues, although papers
                will continue to be published online immediately upon
                acceptance.  We currently have nine papers in the pipeline for
                this volume.  There is no minimum or maximum size for an
                issue. Our goal is to publish five papers per issue.  I invite
                authors to discuss paper topics with me even before submitting
                their work.  The primary advice that I offer all authors is to
                fully and candidly present information in the form that they
                themselves would seek as readers.  That often includes
                <b>discussion of failure cases and integration issues, test data,
                    and source code</b>.
                </p>
                <p>
                I updated
                the <a href="/files/jcgt-template.zip">style
                    file</a> for Volume 2. The new format allows more text per
                page while retaining the single-column layout and relatively
                wide margins intended to enhance readability on tablets.  The
                new template also adds explicit permission for re-use of the
                abstract and teaser (with attribution) for the purpose of
                summarizing or promoting the paper.  We're working to expand
                our archival process and website, register with indexing
                services, and promote the journal in a variety of ways.  The
                importance of these initiatives will scale with the number of
                papers in the journal. Our first priority remains publishing the
                timely and high-quality articles that you find on this website.
                </p>
                <p style="padding-left:280px">
                <i>—Morgan&nbsp;McGuire</i>
                </p>
            ]]></description>
  </item>
  <item>
    <title><![CDATA[A Compressed Depth Cache]]></title>
    <link>http://jcgt.org/published/0001/01/05</link>
    <guid>http://jcgt.org/published/0001/01/05</guid>
    <category>Paper</category>
    <pubDate>2012-12-31</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0001/01/05/icon.png">]]><![CDATA[<p>Jon Hasselgren, Magnus Andersson, Jim Nilsson,  and Tomas Akenine-M&ouml;ller, A Compressed Depth Cache, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 1, no. 1, 101-118, 2012</p>]]><![CDATA[<p>We propose a depth cache that keeps the depth data in compressed format, when possible. Compared to previous work, this requires a more flexible cache implementation, where a tile may occupy a variable number of cache lines depending on whether it can be compressed or not. The advantage of this is that the effective cache size increases proportionally to the compression ratio. We show that the depth-buffer bandwidth can be reduced, on average, by 17%, compared to a system compressing the data after the cache. Alternatively, and perhaps more interestingly, we show that pre-cache compression in all cases increases the effective cache size by a factor of two or more, compared to a post-cache compressor, at equal or higher performance.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[The <i>Deco</i> Framework for Interactive Procedural Modeling]]></title>
    <link>http://jcgt.org/published/0001/01/04</link>
    <guid>http://jcgt.org/published/0001/01/04</guid>
    <category>Paper</category>
    <pubDate>2012-12-28</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0001/01/04/icon.png">]]><![CDATA[<p>Radom&iacute;r M&#283;ch and Gavin Miller, The <i>Deco</i> Framework for Interactive Procedural Modeling, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 1, no. 1, 43-99, 2012</p>]]><![CDATA[<p>In this paper we introduce <i>Deco</i>, a powerful framework for procedural design that shipped in Adobe Flash Pro CS4 through CS6 and in Adobe Photoshop CS6. The Deco framework generates complex patterns from a small number of input parameters and models encoded as JavaScript objects, all stored in text files called <i>scriptals</i>. The object's methods define local growth and behavior or the rendering of the pattern. A collection of libraries simulates global interactions between the resulting structure and the environment. In addition, an artist can interactively control both the procedural growth and the resulting pattern or structure.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Performance per What?]]></title>
    <link>http://jcgt.org/published/0001/01/03</link>
    <guid>http://jcgt.org/published/0001/01/03</guid>
    <category>Paper</category>
    <pubDate>2012-10-18</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0001/01/03/icon.png">]]><![CDATA[<p>Tomas Akenine-M&ouml;ller and Bj&ouml;rn Johnsson, Performance per What?, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 1, no. 1, 37-41, 2012</p>]]><![CDATA[<p>In this short note, we argue that <i>performance per watt</i>, which is often cited in the graphics hardware industry, is <i>not</i> a particularly useful unit for power efficiency in scientific and engineering discussions. We argue that joules per task and watts are more reasonable units. We show a concrete example where nanojoules per pixel is much more intuitive, easier to compute aggregate statistics from, and easier to reason about.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[The Compact YCoCg Frame Buffer]]></title>
    <link>http://jcgt.org/published/0001/01/02</link>
    <guid>http://jcgt.org/published/0001/01/02</guid>
    <category>Paper</category>
    <pubDate>2012-09-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0001/01/02/icon.png">]]><![CDATA[<p>Pavlos Mavridis and Georgios Papaioannou, The Compact YCoCg Frame Buffer, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 1, no. 1, 19-35, 2012</p>]]><![CDATA[<p>In this article we present a lossy frame-buffer compression format, suitable for existing commodity GPUs and APIs. Our compression scheme allows a full-color image to be directly rasterized using only two color channels at each pixel, instead of three, thus reducing both the consumed storage space and bandwidth during the rendering process. Exploiting the fact that the human visual system is more sensitive to fine spatial variations of luminance than of chrominance, the rasterizer generates fragments in the YCoCg color space and directly stores the chrominance channels at a lower resolution using a mosaic pattern. When reading from the buffer, a simple and efficient edge-directed reconstruction filter provides a very precise estimation of the original uncompressed values. We demonstrate that the quality loss from our method is negligible, while the bandwidth reduction results in a sizable increase in the fill rate of the GPU rasterizer.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Importance Sampling of Reflection from Hair Fibers]]></title>
    <link>http://jcgt.org/published/0001/01/01</link>
    <guid>http://jcgt.org/published/0001/01/01</guid>
    <category>Paper</category>
    <pubDate>2012-06-22</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/published/0001/01/01/icon.png">]]><![CDATA[<p>Christophe Hery and Ravi Ramamoorthi, Importance Sampling of Reflection from Hair Fibers, <i>Journal of Computer Graphics Techniques (JCGT)</i>, vol. 1, no. 1, 1-17, 2012</p>]]><![CDATA[<p>Hair and fur are increasingly important visual features in production rendering, and physically-based light scattering models are now commonly used. In this paper, we enable efficient Monte Carlo rendering of specular reflections from hair fibers. We describe a simple and practical importance sampling strategy for the reflection term in the Marschner hair model. Our implementation enforces approximate energy conservation, including at grazing angles by modifying the samples appropriately, and includes a Box-Muller transform to effectively sample a Gaussian lobe. These ideas are simple to implement, but have not been commonly reported in standard references. Moreover, we have found them to have broader applicability in sampling surface specular BRDFs. Our method has been widely used in production for more than a year, and complete pseudocode is provided.</p>]]></description>
  </item>
  <item>
    <title><![CDATA[Announcement]]></title>
    <link>http://jcgt.org/news/2012-05-30-announce.html</link>
    <guid>http://jcgt.org/news/2012-05-30-announce.html</guid>
    <category>News</category>
    <pubDate>2012-05-30</pubDate>
    <description><![CDATA[<img src="http://jcgt.org/teapot.png">]]><![CDATA[
                <p>
                Dear Colleague,
                </p>
                <p>
                We are proud to announce a new computer graphics journal:
                "<b><i>The Journal of Computer Graphics Techniques</i></b>."  <i>JCGT</i>
                embraces the changing world of scientific publishing, which is
                moving away from for-profit print journals and towards free,
                online, digital publications.  <i>JCGT</i> is a high-quality,
                peer-reviewed, and free journal for techniques: short papers
                of clear practical or theoretical value. We are dedicated to
                advancing the field of computer graphics through the
                publication of techniques in all areas of computer graphics,
                including software, hardware, games, and interaction.
                </p>
                <p>
                <i>JCGT</i> is the next step in a respected tradition.  When the best
                way to disseminate new, self-contained graphics techniques was
                in book form, the <i>Graphics Gems Series</i> was born.  When we
                wanted to add peer review and a more timely publication
                schedule, many of the former <i>Graphics Gems</i> editors banded
                together to form the <i>Journal of Graphics Tools</i>.  Today, we can
                publish even more promptly and eliminate all subscription and
                author fees by moving to online digital publication.
                Thus, the <i>JGT</i> founding editor, advisory board, and much of the
                2012 editorial board have resigned from <i>JGT</i>, and together we
                have created the free and non-profit <i>Journal of Computer
                    Graphics Techniques</i>.
                </p>
                <p>
                <i>JCGT</i>'s editorial board will continue our commitment to
                rigorous peer review in a tight review cycle with high-quality
                editorial feedback.  We will use modern online tools to
                produce and host our articles.  Thanks to the nature of online
                publishing, papers can be published immediately after their
                review cycle concludes and they are approved by the
                Editor-in-Chief.  Because it's online, <i>JCGT</i> can host
                supplementary material such as source code and images, and
                link them to the original article.  In keeping with
                volunteer-driven, non-profit Open Access models, <i>JCGT</i>
                will use a minimal, non-exclusive copyright and publish all
                source code as Open Source.
                </p>
                <p>
                We are all committed to <i>JCGT</i>'s success as a
                high-quality, peer-reviewed publication, and we invite your
                submissions now
                at <a href="http://jcgt.org">http://jcgt.org</a>.
                </p>
            ]]></description>
  </item>
</channel>
</rss>
