A generative deep learning approach for exploring layout variation in visual poster design

(1) * Peter Ardhianto (Visual Communication Design, Soegijapranata Catholic University, Semarang, Indonesia)
(2) Yonathan Purbo Santosa (Department of Informatics Engineering, Soegijapranata Catholic University, Semarang, Indonesia)
(3) Yori Pusparani (Department of Digital Media Design, Asia University, Taichung, Taiwan, Province of China)
*corresponding author

Abstract


Layout variation is an essential concept in design, allowing designers to create a sense of depth and complexity in their work. However, manually creating layout variations is time-consuming and can limit a designer's creativity. This study explores generative art as a tool for creating visual poster designs that emphasize layout variety. Deep learning through generative art offers a solution by using an algorithm to generate layout variations automatically. This paper describes a generative art system based on the VQGAN and CLIP approach, which renders images from a text prompt and produces a series of variations using a zoom parameter of 0.95 and a 5-pixel shift along the y-axis. Our experiment shows that one frame can be generated in roughly 10.108±0.226 seconds, significantly faster than conventional methods of creating layouts for poster design. The model achieved good image quality, scoring 4.248 on the Inception Score evaluation. The resulting layout variations can serve as a basis for poster visuals, allowing designers to explore different visual representations of layouts. This paper demonstrates the potential of generative art to explore layout variation in visual design, offering designers a new approach to creating dynamic and engaging visual designs.
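The per-frame zoom-and-shift transform summarized above (zoom 0.95, 5-pixel y-axis shift) can be sketched as follows. This is a minimal illustration only, assuming the transform is applied with Pillow's `resize` and `paste`; the function name `zoom_shift` and the canvas handling are our own, not taken from the paper:

```python
from PIL import Image

def zoom_shift(frame: Image.Image, zoom: float = 0.95, dy: int = 5) -> Image.Image:
    """Scale the frame by `zoom`, paste it centred on a same-sized canvas,
    then offset it `dy` pixels along the y-axis."""
    w, h = frame.size
    scaled = frame.resize((int(w * zoom), int(h * zoom)), Image.LANCZOS)
    canvas = Image.new(frame.mode, (w, h))
    # centre the zoomed image horizontally and vertically, then shift down by dy
    x0 = (w - scaled.width) // 2
    y0 = (h - scaled.height) // 2 + dy
    canvas.paste(scaled, (x0, y0))
    return canvas

# Iterating the transform on a generated image yields a sequence of
# layout variations (here a blank placeholder stands in for a VQGAN+CLIP frame).
frames = [Image.new("RGB", (256, 256), "white")]
for _ in range(3):
    frames.append(zoom_shift(frames[-1]))
```

In a full pipeline, each transformed frame would be fed back to the VQGAN+CLIP loop as the starting image for the next optimization step, so the zoom and shift accumulate across the rendered sequence.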

Keywords


Visual Design; VQGAN; Generative Art; Designer; Dynamic Design


DOI

https://doi.org/10.31763/viperarts.v5i1.920




Copyright (c) 2023 Peter Ardhianto, PhD.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

___________________________________________________________
International Journal of Visual and Performing Arts
ISSN 2684-9259
Published by Association for Scientific Computing Electronics and Engineering (ASCEE)
W: http://pubs2.ascee.org/index.php/viperarts
E: sularso@ascee.org

