Creating an embodied virtual agent is often a complex process. It involves 3D modeling and animation skills, advanced programming knowledge, and, in some cases, artificial intelligence or the integration of complex interaction models. Features such as lip-syncing to an audio file, recognizing a user's speech, or having the character move at certain times in certain ways are often inaccessible to researchers who want to build and use such agents for education, research, or industrial applications. VAIF, the Virtual Agent Interaction Framework, is an extensively documented system that attempts to bridge that gap by giving inexperienced researchers the tools and means to develop their own agents on a centralized, lightweight platform that provides all of these features through a simple interface within the Unity game engine. In this paper we present the platform, describe its features, and provide a case study in which agents were developed and deployed on mobile-device, virtual-reality, and augmented-reality platforms by users with no coding experience.