Philosophy of Photography, Issue 1-2: Expanded Visualities: Photography and Emerging Technologies
  • ISSN: 2040-3682
  • E-ISSN: 2040-3690

Abstract

This article examines the emerging technology of the neural radiance field (NeRF) and suggests that this means of digital image production, when used by a creative practitioner, produces an emergent aesthetic – a result of the affordances inherent in the digital materiality of the NeRF and its processes. Using this machine-learning technology as a material for image-making can lead to an entirely new kind of photographic representation: one that offers a way of seeing, and of recording, how we phenomenologically experience seeing. Both the processes and the emergent aesthetic are explored through examples drawn from the author’s own practice.
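
The technique at the centre of the article is compact enough to sketch. Following Mildenhall et al. (2020), a NeRF stores a scene as a learned function mapping a 3D position (and, in the full model, a viewing direction) to a colour and a volume density, and renders an image by alpha-compositing samples of that function along each camera ray. The Python sketch below is a minimal, hypothetical illustration only: it uses plain NumPy, omits the viewing direction, and substitutes fixed random weights for the multilayer perceptron that a real NeRF optimizes against the input photographs.

```python
import numpy as np

def positional_encoding(p, n_freqs=10):
    """Lift each coordinate into sin/cos features at octave frequencies,
    so the network can represent high-frequency detail."""
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi           # (n_freqs,)
    angles = p[..., None] * freqs                         # (..., 3, n_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)                 # (..., 3 * 2 * n_freqs)

def radiance_field(points):
    """Stand-in for the trained network F: (x, y, z) -> (rgb, sigma).
    Fixed random weights here; a real NeRF optimizes these weights
    against the input photographs."""
    rng = np.random.default_rng(0)
    enc = positional_encoding(points)
    w = rng.normal(scale=0.1, size=(enc.shape[-1], 4))
    out = enc @ w
    rgb = 1.0 / (1.0 + np.exp(-out[..., :3]))             # sigmoid -> colour in [0, 1]
    sigma = np.log1p(np.exp(out[..., 3]))                 # softplus -> density >= 0
    return rgb, sigma

def render_ray(origin, direction, near=0.0, far=4.0, n_samples=64):
    """Volume-render one camera ray: sample the field along the ray and
    alpha-composite, C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)                 # depths along the ray
    points = origin + t[:, None] * direction              # (n_samples, 3)
    rgb, sigma = radiance_field(points)
    delta = np.append(np.diff(t), 1e10)                   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                  # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # composited pixel colour

pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
print(pixel)  # one RGB value; a full render repeats this for every pixel
```

Training consists of rendering rays in this way, comparing the composited colours against the corresponding pixels of the source photographs, and adjusting the network weights by gradient descent until the field reproduces the scene from novel viewpoints.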

DOI: https://doi.org/10.1386/pop_00093_1

References

  1. Berger, John ([1972] 1997), Ways of Seeing: Based on the BBC Television Series with John Berger, London: BBC and Penguin Books.
  2. Chen, Zhiqin and Zhang, Hao (2019), ‘Learning implicit fields for generative shape modeling’, Cornell University, 16 September, http://arxiv.org/abs/1812.02822. Accessed 16 September 2019.
  3. Clarke, A. C. (1982), Profiles of the Future: An Inquiry into the Limits of the Possible, 2nd rev. ed., London: Gollancz.
  4. Clemens, Justin and Nash, Adam (2010), ‘Seven theses on the concept of “post-convergence”’, Academia, https://www.academia.edu/27057333/Seven_theses_on_the_concept_of_post_convergence. Accessed 10 April 2023.
  5. Crawford, Kate (2021), The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence, New Haven, CT: Yale University Press.
  6. Dellaert, Frank (2022a), ‘NeRF at CVPR 2022’, Frank Dellaert, 21 June, https://dellaert.github.io/NeRF22/. Accessed 10 April 2023.
  7. Dellaert, Frank (2022b), ‘Technical perspective: Neural radiance fields explode on the scene’, Communications of the ACM, 65:1, p. 98.
  8. Drucker, Johanna (2021), The Digital Humanities Coursebook: An Introduction to Digital Methods for Research and Scholarship, Abingdon: Routledge.
  9. Harwood, Graham (2008), ‘Pixel’, in M. Fuller (ed.), Software Studies: A Lexicon, Cambridge, MA: MIT Press, pp. 213–17.
  10. Hunger, Francis (2023), ‘Unhype artificial “intelligence!”: A proposal to replace the deceiving terminology of AI’, Zenodo, 12 April, https://doi.org/10.5281/zenodo.7524493. Accessed 12 April 2023.
  11. Inwood, Michael (2015), ‘The use and abuse of vision’, in A. Cimino and P. Kontos (eds), Phenomenology and the Metaphysics of Sight, vol. 13, Leiden: Brill, pp. 165–83.
  12. Merleau-Ponty, Maurice and Landes, Donald A. (2012), Phenomenology of Perception, Abingdon and New York: Routledge.
  13. Mildenhall, Ben, Srinivasan, Pratul P., Tancik, Matthew, Barron, Jonathan T., Ramamoorthi, Ravi and Ng, Ren (2020), ‘NeRF: Representing scenes as neural radiance fields for view synthesis’, Communications of the ACM, 65:1, pp. 99–106.
  14. Müller, Thomas, Evans, Alex, Schied, Christoph and Keller, Alexander (2022), ‘Instant neural graphics primitives with a multiresolution hash encoding’, ACM Transactions on Graphics, 41:4, pp. 1–15.
  15. NVlabs (2022), ‘Tips for training NeRF models with instant neural graphics primitives’, NVlabs, https://github.com/NVlabs/instant-ngp/blob/e45134b9bcf50d0c04f27bc3ab3cde57c27f5bc8/docs/nerf_dataset_tips.md. Accessed 17 April 2023.
  16. Palmer, Daniel and Sluis, Katrina (2023), ‘Photography after AI’, Artlink, 43:2, pp. 18–27.
  17. Rudin, Cynthia (2019), ‘Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead’, Nature Machine Intelligence, 1:5, pp. 206–15.
  18. Shiffman, Daniel (2017), ‘3.4: Linear regression with gradient descent: Intelligence and learning’, YouTube, 31 May, https://www.youtube.com/watch?v=L-Lsfu4ab74. Accessed 10 April 2017.
  19. Sokolowski, Robert (2000), Introduction to Phenomenology, Cambridge and New York: Cambridge University Press.
  20. Somaini, Antonio (2022), ‘On the photographic status of images produced by generative adversarial networks (GANs)’, Philosophy of Photography, 13:1, pp. 153–64.
  21. Steyerl, Hito (2023), ‘Mean images’, New Left Review, 140/141, pp. 82–97.
  22. Tancik, Matthew, Weber, Ethan, Ng, Evonne, Li, Ruilong, Yi, Brent, Kerr, Justin, Wang, Terrance, Kristoffersen, Alexander, Austin, Jake, Salahi, Kamyar, Ahuja, Abhik, McAllister, David and Kanazawa, Angjoo (2023), ‘Nerfstudio: A modular framework for neural radiance field development’, Cornell University, 17 October, http://arxiv.org/abs/2302.04264. Accessed 8 February 2023.
  23. Yiu, Sheung (2022), ‘Excerpts from Everything Is a Projection (2020–present): Digital photography and 3D photogrammetry’, Philosophy of Photography, 12:1, pp. 149–60.