ISO-690 (author-date, English)

KIM, Heejin, LEE, Jeongha and BAHN, Hyokyung, 2025. Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms. Mathematics (2227-7390). 15 November 2025. Vol. 13, no. 22, p. 3689-3705. DOI 10.3390/math13223689.

Elsevier - Harvard (with titles)

Kim, H., Lee, J., Bahn, H., 2025. Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms. Mathematics (2227-7390) 13, 3689-3705. https://doi.org/10.3390/math13223689

American Psychological Association 7th edition

Kim, H., Lee, J., & Bahn, H. (2025). Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms. Mathematics (2227-7390), 13(22), 3689-3705. https://doi.org/10.3390/math13223689

Springer - Basic (author-date)

Kim H, Lee J, Bahn H (2025) Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms. Mathematics (2227-7390) 13:3689-3705. https://doi.org/10.3390/math13223689

Juristische Zitierweise (Stüber) (German)

Kim, Heejin/Lee, Jeongha/Bahn, Hyokyung, Rethinking I/O Caching for Large Language Model Inference on Resource-Constrained Mobile Platforms, Mathematics (2227-7390) 2025, 3689-3705.

Please check the citations for accuracy before inserting them into your work.