Interactive 3D Device Showcase with Threepipe



Threepipe is a brand new framework for creating 3D web applications using JavaScript or TypeScript. It provides a high-level API built on top of Three.js, offering a more intuitive and efficient way to develop 3D experiences for the web. Threepipe comes with a plugin system (and many built-in plugins), making it easy to extend functionality and integrate various features into your 3D projects.

In this tutorial, we’ll create an interactive 3D device mockup showcase using Threepipe, featuring a MacBook and an iPhone model, where users can interact with the models by clicking and hovering over the objects, and drop images to display on the devices. Check out the final version.

See the Pen
ThreePipe: Device Mockup Experiment (Codrops) by Palash Bansal (@repalash).

This can further be extended to create a full web experience to showcase websites and designs, create and render mockups, and so on. It is inspired by an old three.js experiment to render custom device mockups – carbonmockups.com, which requires a lot more work when working with only three.js from scratch. This tutorial will cover setting up the model and animations in a no-code editor, and using code with predefined plugins to add user interactions for websites.

Setting up the project

Codepen

You can quickly prototype in JavaScript on Codepen. Here is a starter pen with the basic setup: https://codepen.io/repalash/pen/GRbEONZ?editors=0010

Simply fork the pen and start coding.

Local Setup

To get started with Threepipe locally, you need to have Node.js installed on your machine. Vite Projects require Node.js version 18+, so upgrade if your package manager warns about it.
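
You can verify the installed version by running the following command in your terminal (any version 18 or above is fine):

node -v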

  1. A new project can be quickly created using the npm create command. Open your terminal and run the following command:
npm create threepipe
  2. Follow the prompts:
    • Choose a project name (e.g., “device-mockup-showcase”)
    • Select “JavaScript” or “TypeScript” based on your preference
    • Choose “A basic scene” as the template
  3. This will create a basic project structure with a 3D scene using Threepipe and a bundler setup using Vite.
  4. Navigate to your project directory, install the dependencies, and run the project:
cd device-mockup-showcase
npm install
npm run dev
  5. Open the project in your browser by visiting http://localhost:5173/ and you should see a basic 3D scene.

Starter code

After creating a basic project, open the file src/main.ts.

This is a basic setup for a 3D scene using Threepipe that loads a sample 3D model of a helmet and an environment map (for lighting). The scene is rendered on a canvas element with the ID threepipe-canvas (which is added to the file index.html).

The ThreeViewer class is used to create a new 3D viewer instance. The viewer has several components including a Scene, Camera (with controls), Renderer, RenderManager, AssetManager, and some default plugins. It is set up to provide a quickstart to create a three.js app with all the required components. Additionally, plugins like LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, and ContactShadowGroundPlugin are added to extend the functionality of the viewer. We will add more plugins to the viewer for different use cases as we progress through the tutorial.

Check the comments in the code to understand what each part does.

import {
  ContactShadowGroundPlugin,
  IObject3D,
  LoadingScreenPlugin,
  ProgressivePlugin,
  SSAAPlugin,
  ThreeViewer
} from 'threepipe';
import {TweakpaneUiPlugin} from '@threepipe/plugin-tweakpane';

async function init() {

  const viewer = new ThreeViewer({
    // The canvas element where the scene will be rendered
    canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
    // Enable/Disable MSAA
    msaa: false,
    // Set the render scale automatically based on the device pixel ratio
    renderScale: "auto",
    // Enable/Disable tone mapping
    tonemap: true,
    // Add some plugins
    plugins: [
        // Show a loading screen while the model is downloading
        LoadingScreenPlugin,
        // Enable progressive rendering and SSAA
        ProgressivePlugin, SSAAPlugin,
        // Add a ground with contact shadows
        ContactShadowGroundPlugin
    ]
  });

  // Add a plugin with a debug UI for tweaking parameters
  const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true));

  // Load an environment map
  await viewer.setEnvironmentMap('https://threejs.org/examples/textures/equirectangular/venice_sunset_1k.hdr', {
    // The environment map can also be used as the scene background
    setBackground: false,
  });

  // Load a 3D model with auto-center and auto-scale options
  const result = await viewer.load<IObject3D>('https://threejs.org/examples/models/gltf/DamagedHelmet/glTF/DamagedHelmet.gltf', {
    autoCenter: true,
    autoScale: true,
  });

  // Add some debug UI elements for tweaking parameters
  ui.setupPlugins(SSAAPlugin)
  ui.appendChild(viewer.scene)
  ui.appendChild(viewer.scene.mainCamera.uiConfig)

  // Every object, material, etc has a UI config that can be added to the UI to configure it.
  const model = result?.getObjectByName('node_damagedHelmet_-6514');
  if (model) ui.appendChild(model.uiConfig, {expanded: false});

}

init();

Creating the 3D scene

For this showcase, we’ll use 3D models of a MacBook and an iPhone. You can find free 3D models online or create your own using software like Blender.

These are two amazing models from Sketchfab that we will use in this tutorial:

  • Apple iPhone 15 Pro Max (Black) by timblewee – https://sketchfab.com/3d-models/apple-iphone-15-pro-max-black-df17520841214c1792fb8a44c6783ee7
  • MacBook Pro 13 inch 2020 by polyman Studio – https://sketchfab.com/3d-models/macbook-pro-13-inch-2020-efab224280fd4c3993c808107f7c0b38

Using the models, we’ll create a scene with a MacBook and an iPhone placed on a table. The user can interact with the scene by rotating and zooming in/out.

Threepipe provides an online editor to quickly create a scene and set up plugin and object properties, which can then be exported as a glb and used in your project.

When the model is downloaded from the editor, all the settings, including the environment map, camera views, post-processing, and other plugin settings, are included in the glb file. This makes it easy to load the model in the project and start using it right away.

For the tutorial, I’ve created and configured a file named device-mockup.glb which you can download from here. Check out the video below on how it’s done in the tweakpane editor – https://threepipe.org/examples/tweakpane-editor/

Adding the 3D models to the scene

To load the 3D model in the project, we can either load the file directly from the URL or download the file to the public folder in the project and load it from there.

Since this model includes all the settings, including the environment map, we can remove the environment map loading code from the starter code and load the file directly.

const viewer = new ThreeViewer({
  canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
  msaa: true,
  renderScale: "auto",
  plugins: [
    LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
  ]
});

const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true));

// Note - We don't need autoscale and center, since that is done in the editor already.
const devices = await viewer.load<IObject3D>('https://asset-samples.threepipe.org/demos/tabletop_macbook_iphone.glb')!;
// or if the model is in the public directory
// const devices = await viewer.load<IObject3D>('./models/tabletop_macbook_iphone.glb')!;

// Find the object roots by name
const macbook = devices.getObjectByName('macbook')!
const iphone = devices.getObjectByName('iphone')!

const macbookScreen = macbook.getObjectByName('Bevels_2')! // the name of the object in the file
macbookScreen.name = 'Macbook Screen' // setting the name for easy identification in the UI.

console.log(macbook, iphone, macbookScreen);

// Add the object to the debug UI. The stored Transform objects can be seen and edited in the UI.
ui.appendChild(macbookScreen.uiConfig, {expanded: false})
ui.appendChild(iphone.uiConfig, {expanded: false})
ui.appendChild(viewer.scene.mainCamera.uiConfig)

This code will load the 3D model in the scene and add the objects to the debug UI for tweaking parameters.

Plugins and animations

The file has been configured in the editor with several camera views (states) and object transform (position, rotation) states. This is done using the plugins CameraViewPlugin and TransformAnimationPlugin. To see the stored camera views and object transforms and interact with them, we need to add them to the viewer and the debug UI.

First, add the plugins to the viewer constructor

const viewer = new ThreeViewer({
   canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
   msaa: true,
   renderScale: "auto",
   plugins: [
      LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
      CameraViewPlugin, TransformAnimationPlugin
   ]
});

Then at the end, add the CameraViewPlugin to the debug UI

ui.setupPluginUi(CameraViewPlugin)

We don’t need to add the TransformAnimationPlugin to the UI since the states are mapped to objects and can be seen in the UI when the object is added.

We can now interact with the UI to play the animations and animate to different camera views.

Transform states are added to two objects in the file, the MacBook Screen and the iPhone. 

The camera views are stored in the plugin and not with any object in the scene. We can view and animate to different camera views using the plugin UI. Here, we have two sets of camera views, one for desktop and one for mobile (with different FoV/position).
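
The same stored states and views can also be triggered from code. Here is a minimal sketch of how that could look once the model is loaded, assuming the 'open' transform state (MacBook screen) and the 'macbook' camera view that we use later in this tutorial:

const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!
const cameraView = viewer.getPlugin(CameraViewPlugin)!

// Animate the MacBook screen to its stored 'open' transform state over 500ms and wait for it to finish
await transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise
// Then animate the camera to the stored 'macbook' view over 1000ms
await cameraView.animateToView('macbook', 1000)

We will use exactly these two calls, driven by pointer events, in the next sections.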

User Interaction

Now that we have the scene set with the models and animations, we can add user interaction to the scene. The idea is to slightly tilt the model when the user hovers over it and fully open it when clicked, along with animating the camera views. Let’s do it step by step.

For the interaction, we can use the PickingPlugin which provides events to handle hover and click interactions with 3D objects in the scene.

First, add PickingPlugin to the viewer plugins

plugins: [
   LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
   CameraViewPlugin, TransformAnimationPlugin, PickingPlugin
]

With this, we can now click on any object in the scene and it will be highlighted with a bounding box.

Now, we can configure the plugin to hide this box and subscribe to the events provided by the plugin to handle the interactions.

// get the plugin instance from the viewer
const picking = viewer.getPlugin(PickingPlugin)!
const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!

// disable the widget(3D bounding box) that is shown when an object is clicked
picking.widgetEnabled = false

// subscribe to the hitObject event. This is fired when the user clicks on the canvas.
picking.addEventListener('hitObject', async(e) => {
   const object = e.intersects.selectedObject as IObject3D
   // selectedObject is null when the user clicks the empty space
   if (!object) {
       // close the macbook screen and face down the iphone
      await transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise
      await transformAnim.animateTransform(iphone, 'facedown', 500)?.promise
      return
   }
   // get the device name from the object
   const device = deviceFromHitObject(object)
   // Change the selected object to the root of the device models. This is used by the widget or other plugins like TransformControlsPlugin to allow editing.
   e.intersects.selectedObject = device === 'macbook' ? macbook : iphone

   // Animate the transform state of the object based on the device name that is clicked
   if(device === 'macbook')
      await transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise
   else if(device === 'iphone')
      await transformAnim.animateTransform(iphone, 'floating', 500)?.promise
})

Here, the animateTransform function is used to animate the transform state of the object. The function takes the object, the state name, and the duration as arguments. The promise returned by the function can be used to wait for the animation to complete.

The deviceFromHitObject function is used to get the device name from the object clicked. This function traverses the parents of the object to find the device model.

function deviceFromHitObject(object: IObject3D) {
   let device = ''
   object.traverseAncestors(o => {
      if (o === macbook) device = 'macbook'
      if (o === iphone) device = 'iphone'
   })
   return device
}

With this code, we can now interact with the scene by clicking on the models to open/close the MacBook screen and face down/floating the iPhone.

Now, we can add camera animations as well to animate to different camera views when the user interacts with the scene.

Get the plugin instance

const cameraView = viewer.getPlugin(CameraViewPlugin)!

Update the listener to animate the views using the animateToView function. The views are named ‘start’, ‘macbook’, and ‘iphone’ in the plugin.

const object = e.intersects.selectedObject as IObject3D
if (!object) {
   await Promise.all([
      transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise,
      transformAnim.animateTransform(iphone, 'facedown', 500)?.promise,
      cameraView.animateToView('start', 500),
   ])
   return
}
const device = deviceFromHitObject(object)
if(device === 'macbook') {
   await Promise.all([
     cameraView.animateToView('macbook', 500),
     transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise
   ])
}else if(device === 'iphone') {
   await Promise.all([
     cameraView.animateToView('iphone', 500),
     transformAnim.animateTransform(iphone, 'floating', 500)?.promise
   ])
}

This would now also animate the camera to the respective views when the user clicks on the models.

In the same way, PickingPlugin provides an event hoverObjectChanged that can be used to handle hover interactions with the objects.

This is pretty much the same code, but we are animating to different states(with different durations) when the user hovers over the objects. We don’t need to animate the camera here since the user is not clicking on the objects.

// We need to first enable hover events in the Picking Plugin (disabled by default)
picking.hoverEnabled = true

picking.addEventListener('hoverObjectChanged', async(e) => {
   const object = e.object as IObject3D
   if (!object) {
      await Promise.all([
         transformAnim.animateTransform(macbookScreen, 'closed', 250)?.promise,
         transformAnim.animateTransform(iphone, 'facedown', 250)?.promise,
      ])
      return
   }
   const device = deviceFromHitObject(object)
   if(device === 'macbook') {
      await transformAnim.animateTransform(macbookScreen, 'hover', 250)?.promise
   }else if(device === 'iphone') {
      await transformAnim.animateTransform(iphone, 'tilted', 250)?.promise
   }
})

On running this, the MacBook screen will slightly open when hovered over and the iPhone will slightly tilt.

Drop files

To allow users to drop images to display on the devices, we can use the DropzonePlugin provided by Threepipe. This plugin allows users to drag and drop files onto the canvas and handle the files in the code.

The plugin can be set up by simply passing the dropzone property in the ThreeViewer constructor; it is then added and configured automatically.

Let’s set some options to handle the images dropped on the canvas.

const viewer = new ThreeViewer({
  canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
  // ...,
  dropzone: {
    allowedExtensions: ['png', 'jpeg', 'jpg', 'webp', 'svg', 'hdr', 'exr'],
    autoImport: true,
    addOptions: {
      disposeSceneObjects: false,
      autoSetBackground: false,
      autoSetEnvironment: true, // when hdr, exr is dropped
    },
  },
  // ...,
});

We are setting autoSetEnvironment to true here, which will automatically set the environment map of the scene when an HDR or EXR file is dropped on the canvas. This way a user can drop their own environment map and it will be used for lighting.

Now, to set the dropped image on the devices, we can listen to the loadAsset event of the AssetManager and set the image to the material of the device screens. This event is fired because the DropzonePlugin automatically imports the dropped file as a three.js Texture object and loads it in the asset manager. To get more control, you can also subscribe to the events in the DropzonePlugin and handle the files yourself.

// Listen to when a file is dropped
viewer.assetManager.addEventListener('loadAsset', (e)=> {
  // Only handle dropped image textures (the loaded asset is assumed to be on e.data)
  if (!e.data?.isTexture) return
  const texture = e.data as ITexture
  // Find the screen materials of the two devices.
  // Note: the object names below are placeholders – use the names of the screen meshes in your model file.
  const mbpScreen = macbookScreen.getObjectByName('screen')?.material as PhysicalMaterial
  const iPhoneScreen = iphone.getObjectByName('screen')?.material as PhysicalMaterial
  if (!mbpScreen || !iPhoneScreen) return
  // Set the dropped image as the emissive map so the screens appear lit
  mbpScreen.color.set(0,0,0)
  mbpScreen.emissive.set(1,1,1)
  mbpScreen.roughness = 0.2
  mbpScreen.metalness = 0.8
  mbpScreen.map = null
  mbpScreen.emissiveMap = texture
  iPhoneScreen.emissiveMap = texture
  mbpScreen.setDirty()
  iPhoneScreen.setDirty()
})

This code listens to the loadAsset event and checks if the loaded asset is a texture. If it is, it sets the texture on the materials of the MacBook and iPhone screens. The texture is set as the emissive map of the material to make it glow, and the emissive color is set to white so the texture is visible. These material property changes only need to be made for the MacBook screen material and not the iPhone's, since the iPhone's material was already set up in the editor.

Final touches

While interacting with the project, you might notice that the animations are not properly synced. This is because the animations are running asynchronously and not waiting for the previous animation to complete.

To fix this, we need to maintain a state properly and wait for any animations to finish before changing it.

Here is the final code with proper state management and other improvements in TypeScript. The JavaScript version can be found on Codepen.

import {
  CameraViewPlugin, CanvasSnapshotPlugin,
  ContactShadowGroundPlugin,
  IObject3D, ITexture,
  LoadingScreenPlugin, PhysicalMaterial,
  PickingPlugin,
  PopmotionPlugin, SRGBColorSpace,
  ThreeViewer,
  timeout,
  TransformAnimationPlugin,
  TransformControlsPlugin,
} from 'threepipe'
import {TweakpaneUiPlugin} from '@threepipe/plugin-tweakpane'

async function init() {

  const viewer = new ThreeViewer({
    canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
    msaa: true,
    renderScale: 'auto',
    dropzone: {
      allowedExtensions: ['png', 'jpeg', 'jpg', 'webp', 'svg', 'hdr', 'exr'],
      autoImport: true,
      addOptions: {
        disposeSceneObjects: false,
        autoSetBackground: false,
        autoSetEnvironment: true, // when hdr, exr is dropped
      },
    },
    plugins: [LoadingScreenPlugin, PickingPlugin, PopmotionPlugin,
      CameraViewPlugin, TransformAnimationPlugin,
      new TransformControlsPlugin(false),
      CanvasSnapshotPlugin,
      ContactShadowGroundPlugin],
  })

  const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true))

  // Model configured in the threepipe editor with Camera Views and Transform Animations, check the tutorial to learn more.
  // Includes models from Sketchfab by timblewee and polyman Studio and an HDR from polyhaven/threejs.org
  // https://sketchfab.com/3d-models/apple-iphone-15-pro-max-black-df17520841214c1792fb8a44c6783ee7
  // https://sketchfab.com/3d-models/macbook-pro-13-inch-2020-efab224280fd4c3993c808107f7c0b38
  const devices = await viewer.load<IObject3D>('./models/tabletop_macbook_iphone.glb')
  if (!devices) return

  const macbook = devices.getObjectByName('macbook')!
  const iphone = devices.getObjectByName('iphone')!

  const macbookScreen = macbook.getObjectByName('Bevels_2')!
  macbookScreen.name = 'Macbook Screen'

  // The Canvas Snapshot plugin can be used to download a snapshot of the canvas.
  ui.setupPluginUi(CanvasSnapshotPlugin, {expanded: false})
  // Add the objects to the debug UI. The stored Transform objects can be seen and edited in the UI.
  ui.appendChild(macbookScreen.uiConfig, {expanded: false})
  ui.appendChild(iphone.uiConfig, {expanded: false})
  // Add the Camera View UI to the debug UI. The stored Camera Views can be seen and edited in the UI.
  ui.setupPluginUi(CameraViewPlugin, {expanded: false})
  ui.appendChild(viewer.scene.mainCamera.uiConfig)
  ui.setupPluginUi(TransformControlsPlugin, {expanded: true})

  // Listen to when an image is dropped and set it as the emissive map for the screens.
  viewer.assetManager.addEventListener('loadAsset', (e)=> {
    // Only handle dropped image textures (the loaded asset is assumed to be on e.data)
    if (!e.data?.isTexture) return
    const texture = e.data as ITexture
    texture.colorSpace = SRGBColorSpace
    // Find the screen materials. Note: the object names below are placeholders – use the names from your model file.
    const mbpScreen = macbookScreen.getObjectByName('screen')?.material as PhysicalMaterial
    const iPhoneScreen = iphone.getObjectByName('screen')?.material as PhysicalMaterial
    if (!mbpScreen || !iPhoneScreen) return
    mbpScreen.color.set(0,0,0)
    mbpScreen.emissive.set(1,1,1)
    mbpScreen.roughness = 0.2
    mbpScreen.metalness = 0.8
    mbpScreen.map = null
    mbpScreen.emissiveMap = texture
    iPhoneScreen.emissiveMap = texture
    mbpScreen.setDirty()
    iPhoneScreen.setDirty()
  })

  // Separate views are created in the file with different camera fields of view and positions to account for mobile screens.
  const isMobile = ()=>window.matchMedia('(max-width: 768px)').matches
  const viewName = (key: string) => isMobile() ? key + '2' : key

  const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!
  const cameraView = viewer.getPlugin(CameraViewPlugin)!

  const picking = viewer.getPlugin(PickingPlugin)!
  // Disable the widget (3D bounding box) in the Picking Plugin (enabled by default)
  picking.widgetEnabled = false
  // Enable hover events in the Picking Plugin (disabled by default)
  picking.hoverEnabled = true

  // Set the initial state
  await transformAnim.animateTransform(macbookScreen, 'closed', 50)?.promise
  await transformAnim.animateTransform(iphone, 'facedown', 50)?.promise
  await cameraView.animateToView(viewName('start'), 50)

  // Track the current and the next state.
  const state = {
    focused: '',
    hover: '',
    animating: false,
  }
  const nextState = {
    focused: '',
    hover: '',
  }
  async function updateState() {
    if (state.animating) return
    const next = nextState
    if (next.focused === state.focused && next.hover === state.hover) return
    state.animating = true
    const isOpen = state.focused
    Object.assign(state, next)
    if (state.focused) {
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, state.focused === 'macbook' ? 'open' : 'closed', 500)?.promise,
        transformAnim.animateTransform(iphone, state.focused === 'iphone' ? 'floating' : 'facedown', 500)?.promise,
        cameraView.animateToView(viewName(state.focused === 'macbook' ? 'macbook' : 'iphone'), 500),
      ])
    } else if (state.hover) {
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, state.hover === 'macbook' ? 'hover' : 'closed', 250)?.promise,
        transformAnim.animateTransform(iphone, state.hover === 'iphone' ? 'tilted' : 'facedown', 250)?.promise,
      ])
    } else {
      const duration = isOpen ? 500 : 250
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, 'closed', duration)?.promise,
        transformAnim.animateTransform(iphone, 'facedown', duration)?.promise,
        isOpen ? cameraView.animateToView(viewName('front'), duration) : null,
      ])
    }
    state.animating = false
  }
  async function setState(next: typeof nextState) {
    Object.assign(nextState, next)
    while (state.animating) await timeout(50)
    await updateState()
  }

  function deviceFromHitObject(object: IObject3D) {
    let device = ''
    object.traverseAncestors(o => {
      if (o === macbook) device = 'macbook'
      if (o === iphone) device = 'iphone'
    })
    return device
  }
  }

  // Fired when the current hover object changes.
  picking.addEventListener('hoverObjectChanged', async(e) => {
    const object = e.object as IObject3D
    if (!object) {
      if (state.hover && !state.focused) await setState({hover: '', focused: ''})
      return
    }
    if (state.focused) return
    const device = deviceFromHitObject(object)
    await setState({hover: device, focused: ''})
  })

  // Fired when the user clicks on the canvas.
  picking.addEventListener('hitObject', async(e) => {
    const object = e.intersects.selectedObject as IObject3D
    if (!object) {
      if (state.focused) await setState({hover: '', focused: ''})
      return
    }
    const device = deviceFromHitObject(object)
    // change the selected object for the transform controls.
    e.intersects.selectedObject = device === 'macbook' ? macbook : iphone
    await setState({focused: device, hover: ''})
  })

  // Close all devices when the user presses the Escape key.
  document.addEventListener('keydown', (ev)=>{
    if (ev.key === 'Escape' && state.focused) setState({hover: '', focused: ''})
  })

}

init()

Here, we are maintaining the state of the scene and waiting for the animations to complete before changing the state. This ensures that the animations are properly synced and the user interactions are handled correctly. Since we are using a single nextState, only the last interaction is considered and the previous ones are ignored.

Also, CanvasSnapshotPlugin and TransformControlsPlugin are added to the viewer to allow users to take snapshots of the canvas and move/rotate the devices on the table. Check the debug UI for both plugins.
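
Snapshots can also be triggered from code. Here is a minimal sketch, assuming the plugin's downloadSnapshot method as described in the Threepipe docs (the file name is just an example):

const snapshotPlugin = viewer.getPlugin(CanvasSnapshotPlugin)!
// Render and download the current canvas as an image (options left at their defaults here).
await snapshotPlugin.downloadSnapshot('device-mockup.png')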

Check out the complete project on Codepen or Github and play around with the scene.

Codepen: https://codepen.io/repalash/pen/ExBXvby?editors=0010 (JS)

Github: https://github.com/repalash/threepipe-device-mockup-codrops (TS)

Next Steps

This tutorial covers the basics of creating an interactive 3D device mockup showcase using Threepipe. You can further enhance the project by adding more models, animations, and interactions.

Extending the project can be done both in the editor and in code. Check out the Threepipe website for more.

Here are some ideas to extend the project:

  • Add some post-processing plugins like SSAO, SSR, etc. to enhance the visuals (see the sketch after this list).
  • Create a custom environment map or use a different HDR image for the scene.
  • Add more 3D models and create a complete 3D environment.
  • Embed an iframe in the scene to display a website or a video directly on the device screens.
  • Add video rendering to export 3D mockups of UI designs.
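
As a starting point for the first idea, here is a minimal sketch that adds Threepipe's built-in SSAOPlugin to the existing viewer (treat this as an assumption and check the Threepipe plugin list for the exact set of post-processing plugins available):

import {SSAOPlugin} from 'threepipe'

// Add screen-space ambient occlusion to the existing viewer.
const ssao = viewer.addPluginSync(new SSAOPlugin())
// Expose its parameters in the debug UI for tweaking.
ui.setupPluginUi(SSAOPlugin)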
