You are on Emiliusvgs! Since version 76, Spark AR has renewed its creation templates. They offer interesting examples of augmented reality, although, let's face it, very few people review them; you might say "boring" or "easy peasy". The thing is, the templates contain many interesting and important examples that most people overlook. It's like when you buy a new grill and don't want to read the instructions: things take longer. In the following minutes, I will explain why these templates matter for working quickly on Spark AR projects. Let's start!

As you will see, when we open Spark AR we find 8 templates. We will go through the most relevant ones. Let's start with "Face Decoration". When we open it, we find a simple instruction indicating that we must replace an element: we add our elements inside "drag here", and "delete me" is clearly the file we must delete before adding ours. This template shows how to add a 3D object, such as glasses, to the face and make it look very realistic. The patch editor already explains interesting details about the filter, for example how the glasses are positioned on the nose, and how the head occluder must move naturally, with the correct scale, when we open the mouth. Of course, if we look at the head occluder and its material, we will see that the 3D object is there with its opacity set to ZERO. This is the important file. Now we will add another type of glasses called "round glasses" inside the null object called "drag here". Then we adjust the file, and we can also change the color.

Now we will use the "World Object" template. This is ideal for examples involving positioning objects on the ground, or rather "on a horizontal surface". These templates expose their properties in the patch editor. Here we see how the plane tracker is used and how to interact with it, that is: how to use screen pan, screen rotate, and pinch.
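Under the hood, the pinch interaction the template wires up in the patch editor amounts to multiplying the object's base scale by the pinch factor and clamping the result. Here is a minimal plain-JavaScript sketch of that logic; the function names and the clamp limits are my own assumptions, not the Spark AR API or the template's patch names:

```javascript
// Sketch of the pinch-to-scale logic the World Object template
// wires up in the patch editor. Plain JS, not the Spark AR API.

// Clamp a value between a minimum and a maximum.
function clamp(value, min, max) {
  return Math.min(Math.max(value, min), max);
}

// Given the object's base scale and the current pinch factor
// (1.0 = no pinch, >1 = fingers moving apart), return the new
// scale, clamped so the object can't vanish or fill the screen.
function pinchScale(baseScale, pinchFactor, min = 0.2, max = 5.0) {
  return clamp(baseScale * pinchFactor, min, max);
}

console.log(pinchScale(1.0, 2.0));  // 2   (fingers apart doubles the size)
console.log(pinchScale(1.0, 0.05)); // 0.2 (clamped at the minimum)
```

Screen pan and screen rotate follow the same pattern: a gesture value drives a transform property, usually clamped or smoothed before it is applied.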
We also found "blocks" connected to other elements for proper placement on surfaces. Now we are going to import a 3D file from Sketchfab. We will look for a "dog"; for example, we have this one, "dog pooping". We delete the previous file and place the new one in the hierarchy under "drag here". The patch editor has many elements added; I recommend you analyze them one by one. To keep the 3D file from rotating, we deactivate the "spinning" option inside the Quick Anim block. Let's try it to see how it looks in the environment.

Now we are going to use a much-requested template: the Makeup one. I think it's a super useful template, because people make many mistakes when creating makeup in a graphics editor like Photoshop. With this template you can make color adjustments for each relevant part of the face (lips, eyes, etc). The interesting thing about these templates is that we can edit the block and understand it; we open another Spark AR file and see what it is composed of. I am not a professional in virtual makeup 🙁

Now we will use another template called "Background". We won't spend much time on it, because it is a very common job for many. Here we learn to add an interactive background to our filter. If you want to know more about the subject, I recommend my video tutorial about segmentation.
There you will find all the information you need to create this type of filter. We will now work with "Color Filter". It is a template that lets us change color, especially brightness, contrast, saturation, and more. We can double-click the patch asset to understand how it works. It is a useful piece of work.

Neck Decoration: this is a very important template, because it covers ornaments worn on the neck, such as angel wings, necklaces, and more. The patch is vital to understanding how it operates. Ideally you should take the time to review each element of the patch, because what is being done here is positioning the 3D object correctly on the neck so that it doesn't move unnaturally when the head moves. Ahh! And of course I also recommend you review the script of that template. Now let's look at the 3D occluder file, which is the key object for this effect. If we increase its opacity, we will see that the object is composed of a face and a neck. If you want to learn more about occlusion, you can watch my tutorial on Portals, because it will help you understand how it works.

I hope you like this short guide to the templates Spark AR offers. I have very interesting tutorials coming up, so I recommend you subscribe, and of course if you like this video, send me a good thumbs up. This is all for now. Emiliusvgs says goodbye, bye!!!
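The brightness, contrast, and saturation adjustments that a color filter applies per pixel can be sketched in plain JavaScript. These are the standard formulas for these operations, not code taken from the Spark AR template itself:

```javascript
// Per-pixel color adjustment, the kind of math a color-filter
// patch applies. Channels are in the 0..1 range. Standard
// formulas, not copied from the Spark AR template.

function clamp01(v) {
  return Math.min(Math.max(v, 0), 1);
}

// brightness: added to each channel (-1..1)
// contrast:   scales distance from mid-gray (1 = unchanged)
// saturation: blends between grayscale and the original (1 = unchanged)
function adjust([r, g, b], { brightness = 0, contrast = 1, saturation = 1 }) {
  // Apply brightness, then contrast around mid-gray (0.5).
  const bc = [r, g, b].map(c => (c + brightness - 0.5) * contrast + 0.5);
  // Luma (Rec. 709 weights) gives the grayscale version.
  const luma = 0.2126 * bc[0] + 0.7152 * bc[1] + 0.0722 * bc[2];
  // Blend from grayscale to the adjusted color by saturation.
  return bc.map(c => clamp01(luma + (c - luma) * saturation));
}

console.log(adjust([0.5, 0.5, 0.5], { brightness: 0.25 })); // [0.75, 0.75, 0.75]
```

Double-clicking the template's patch asset reveals essentially this chain, built out of visual patches instead of code.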
Hi, what's up! Welcome to Emiliusvgs! The beginning of 2020 has been very interesting for me, because there was a particular boom in random sequential image filters, for example: What Disney character are you? What meme are you? 2020 Predictions, and Next Vacation? I appreciate the mention from Filippo, the creator of "2020 Predictions". He learned to make his filter from my tutorial and shared my video on several platforms! That helped me a lot. Thanks, Filippo. Now I will show you the third part of the tutorial so you can create more interesting filters. If you are new to Spark AR, welcome! It's time to keep growing. Let's start!

We open Spark AR where we left off in the last tutorial. As always, the files are on my Patreon, and I recommend you watch my previous videos on the subject. In this video you will learn to add a delay when video recording starts, that is, it will take some time before the first image disappears. We will also see how to use the effect on two people, and finally I will explain how to make an element such as particles appear when the animation stops. I mentioned the delay topic on my website, but I know not everyone visits it because it's in Spanish, so I will teach you right now. We add the elements that are attached to the camera, and inside the patch we add the "delay"… In duration we put 2 seconds, and of course we have to modify the value in "less than", because that affects the whole experience. If I want the image sequence to stop at second 6, in "less than" I should go down to 4. This function is fully customizable. Now we connect the delay with the pulse and with sequence 1. If for some reason you get an error, try refreshing the project. Let's try it! The second tip is to make the filter usable by two people, that is, to make the sequence of images appear on several heads. If you notice, the whole effect is inside the face tracker.
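In plain terms, the delay + "less than" patch chain means: the sequence starts `delay` seconds after recording begins and keeps running while the elapsed time since it started is below the "less than" threshold. A tiny JavaScript model of that timing (my own naming, not the patch names):

```javascript
// Model of the "delay" + "less than" patch chain:
// the image sequence starts `delay` seconds after recording
// begins, and runs while the time elapsed since it started
// is below the `lessThan` threshold.

function sequenceIsRunning(t, delay, lessThan) {
  return t >= delay && (t - delay) < lessThan;
}

// With a 2 s delay and a "less than" of 4, the sequence
// stops at second 6, exactly as in the video.
console.log(sequenceIsRunning(1, 2, 4)); // false (still in the delay)
console.log(sequenceIsRunning(3, 2, 4)); // true  (sequence running)
console.log(sequenceIsRunning(6, 2, 4)); // false (stopped at second 6)
```

This is why lowering "less than" to 4 while keeping a 2-second delay gives a total stop time of 6 seconds: the two values simply add up.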
We are going to duplicate this face tracker so that we have two. If you notice, our first face tracker tracks face number 1, so in our second face tracker we will change its tracked face to "face 2". By duplicating the face tracker, you also duplicate everything inside it: you will find the null object, sequence 1, and the other plane. I recommend you rename them to keep things tidy. As you can see in the patch, there is no element related to the new face tracker yet.
Let's do it! First we add sequence 1 of face tracker 2 inside the patch, and we connect the same elements. Now we need to add the "transition position" plane inside the patch and connect it this way. If you leave your project as it is now, the same result will appear on both faces. That is not ideal, so we will have to modify or add another animation sequence. Fortunately, I have another texture available as an image sequence, called "1 to 4". So we create an animation sequence and, in its texture, add our "1 to 4" file. We make some adjustments. We are not done yet: for our second "sequence 1" we create a new material to host the new image sequence. In the texture of the new material we add animation sequence 1, the new one. Finally, we add the current frame of animation sequence 1 inside the patch, like this. Ideally, the two image sequences should have the same number of images so that they behave the same way. Let's try it…

Finally, we are going to make a particle appear when the sequence stops at a specific second. This is a matter of timing. Let's add a particle emitter. I could add it to both face meshes, but for this example I'll leave it on one. The particles should become visible after a few seconds, so we need to add another "delay"; we can copy the delay we already have. Now we must coordinate things so that the particles appear exactly when the sequence stops. My random image sequence stops at second 4.8, so this new delay will last 4.8 seconds. Then we add the "visible" property of the particle emitter to the patch and connect it; after that we can customize. The particle can be replaced by a GIF, a 3D object, a message, anything that makes your idea more relevant and distinctive. I hope you liked this tutorial, which groups together several interesting tips.
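Coordinating the particle with the sequence is just a matter of matching delays: the particle's delay equals the second at which the sequence stops. A short JavaScript sketch of that timing, with an illustrative frame counter (all names are mine, not patch names; the frame rate and frame count are made up for the example):

```javascript
// The particle's "delay" must equal the second at which the
// random image sequence stops, so that both events coincide.

// Current frame of a looping image sequence at time t,
// given a frame rate and a total frame count.
function currentFrame(t, fps, frameCount) {
  return Math.floor(t * fps) % frameCount;
}

// The sequence in the video stops at second 4.8, so the
// particle delay is simply that same value.
const sequenceStopsAt = 4.8;
const particleDelay = sequenceStopsAt;

function particlesVisible(t) {
  return t >= particleDelay;
}

console.log(currentFrame(0.5, 10, 4)); // 1
console.log(particlesVisible(4.0));    // false (sequence still running)
console.log(particlesVisible(5.0));    // true  (sequence stopped, particles on)
```

If you change the sequence's stopping time later, remember to update the particle's delay to the same value, or the two events will drift apart.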
Remember that the random image sequence is not just for creating predictions; it can be used to build many kinds of interactive elements. This is all for now. Emiliusvgs says goodbye, byee!
Hi! Welcome to Emiliusvgs. Today we will continue talking about facial extraction. My previous video on it had a good reception. On this occasion I will talk about two types of face extraction you asked me about a lot. First: using your face on a static image. Second: adding your face to a 3D object. Let's start!

We will continue where we left off in my last video (part I). I recommend you watch it so you can follow the process; of course, I attach my files on Patreon. In Spark AR we have the face extraction example, where the face moves the whole scene. Many users wanted to know how to keep the planes static so that only the face moves, within a certain limit. That is possible, although you have to be careful with this technique, because the biggest element is static, and we already know that Spark AR does not like "static elements". That is why we are going to add some dynamics, as you will see 😉 We are going to add a null object to group the "plane" and the "face mesh", separating them from the "face tracker", which is the parent element in the current hierarchy. We group them and move them into that null object.
Automatically, the face and the plane became fixed; behind them is the background plane (texture), and the face tracker was left out. But this element is still necessary for the filter to work; without it, the effect would not exist. You must always position the face mesh properly, so that the illusion holds that our face matches the dimensions of the body in the image. We will keep working, because I don't want only static elements in the scene. We are going to add a simple animation to move the whole null object; I will use a spin animation. If you look closely, it is connected to the rotation of that null object. In this case I will spin on the Z axis. I can change the animation loop from 1 to 3 seconds to make it slower. Now, to work in some variations, I will change "loop animation" to "animation" for more control. I keep the duration at 3 seconds, and we add a screen tap to start the interaction. Let's look at the result in the video preview. We will make another animation to move it away from the camera, so we copy the previous animation and transition patches. Then we connect them to the "position" value of the null object; on the Z axis I will put -5, and we reset the experience. I tap the screen and watch the video preview. We can keep making adjustments to test how the experience would feel. If we want the effect to come and go smoothly, we must add a switch to act as the driver between the screen tap and the animation. When we add the switch, the "pulse" appears automatically. Now we connect the "turn off" of the pulse to the "reverse" of the "animation", and we do the same with the other animation element. We can optimize the filter with a single "switch" and "pulse". Let's see how it looks on the device. Hi! We are going to test this filter. I move away… I want to get closer… I'm arriving.
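The switch-and-pulse wiring boils down to a boolean that flips on each tap and decides whether the animations play forward or in reverse. A plain-JavaScript model of that little state machine (the names are mine, not the patch names in Spark AR):

```javascript
// Model of the screen tap -> switch -> animation/reverse wiring:
// each tap flips the switch; "turn on" plays the animations
// forward, "turn off" plays them in reverse.

function makeTapSwitch() {
  let on = false;
  return {
    tap() {
      on = !on;
      // The pulse: forward on turn-on, reverse on turn-off.
      return on ? 'play' : 'reverse';
    },
    isOn: () => on,
  };
}

const sw = makeTapSwitch();
console.log(sw.tap()); // 'play'    (spin away from the camera)
console.log(sw.tap()); // 'reverse' (come back)
```

This is also why a single switch and pulse can drive both the spin and the position animation: both listen to the same "play"/"reverse" signal.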
I'm here! Some tweaks are still needed, but the basic filter is there. The image I chose is perhaps not the most appropriate, because it is not looking straight ahead; this other image is looking frontally. Now we are going to add a 3D object on which to place our face, so we must choose a human-looking 3D file, and even better if it has animation. We can use a 3D file from Sketchfab, but it may not be a perfect fit; I recommend you make your own file so that the face area matches the face mesh well. I will use this file: "lola samba dancing". When importing it, don't forget to add its animation. Now we add the face mesh we worked on earlier in the correct place in the hierarchy, close to the head, above the neck or related elements. We will use the skeleton as a reference point until we find the "Head_06" joint, and we drag the face mesh up there. Look at that! We can hide the null object so that the 3D object stands out more. Then we have to edit so that our face fits well, although with this example it won't be very precise, because it doesn't have an adequate size, a color similar to my face, etc. We will try another 3D object: we return our face mesh to the null object and delete that 3D object. Now we import the other file, a Fortnite 3D file. We adjust its scale and look for the right place to attach the face mesh; the right place is "helmet_070". We adjust the face until it fits. Once we have everything in place, we must not forget to add the 3D object's animation. Let's test it. I hope this tutorial has been useful. Let's keep learning to make augmented reality with Spark AR. Don't forget to subscribe and follow me on Instagram. This is all for now. Emiliusvgs says goodbye, bye!
Hi! How are you? Welcome to Emiliusvgs. Adobe Aero just arrived on iOS and has generated a lot of interest from the community of developers and artists. Why? Because Adobe Aero lets you use the full Adobe suite, for example Photoshop, Illustrator, and Adobe Dimension, to create immersive augmented reality experiences without coding. This is a great idea, because Adobe already had a large community of designers and artists, and adding an augmented reality tool can generate a lot of synergy. But you should know this is not the first tool of its kind; Reality Composer, for example, appeared some months ago. Will Adobe Aero be better than that tool? How does Adobe Aero compare to Spark AR? Learn more in this video.

On November 4, Adobe Aero was announced at the Adobe MAX conference as part of Adobe Creative Cloud. Aero is a tool that lets designers build and share immersive AR experiences without knowing how to program. The key word here is SHARE. You can share the work you have done through a video or a preview link, and even export it as an Aero experience, a Reality file for Reality Composer, or USDZ (the augmented reality file format par excellence on iOS). Adobe Aero is a free mobile iOS application for phones and tablets. Adobe's premise is: expand your creative canvas. For example, you can take the same layers you work with in Photoshop, put them into augmented reality, and create a great immersive environment. Adobe Aero has powerful technology; for example, its surface anchoring (or surface tracking) is very accurate and allows you to create realistic experiences. It also has intelligent lighting, which gives realistic results in a simple way.
If you want to use your own files, you can find them in Adobe Creative Cloud or have them ready in a folder beforehand. This is a great initiative that will allow many artists to work with augmented reality. I see it as a tool for prototyping experiences, which will undoubtedly generate a great boost. Let's go to the Adobe Aero introduction tutorial. When we enter, we need to find a surface on which to create the experience; that's why I prefer to point the camera at the floor. Now we add a 3D object; the Adobe Aero tutorial tells us to select the astronaut toy. Once it is selected, we wait for it to load into the environment. Now we can scale or move it within the experience with ease. Then we add the other 3D file, which will be the background of the experience… We need to open behaviors to create simple movements or interactions. We will use the "touch" interaction, and the action will be "spin". Once we have everything we need, we select "preview" and look at the result in augmented reality. Now I will change the animation: this time, I want the astronaut to bounce like this when we tap it, a simple but very interesting animation. Let's try it. One of the features I find simply great is creating animations for static objects.
I mean, you can create animations along a movement path, like the route this wooden helicopter follows. We can use this other animated object and launch its default action. The graphic quality is very good. Adobe Aero supports glb, obj, dae, fbx, and many more formats… Let's make a small example to demonstrate the lighting: we add a tree and also small objects around it. We could add animations, but what I want to show is how this tree, if we enlarge it, casts a natural shadow that gives the feeling of real interaction with the environment. An interesting feature. Making this initial overview of Adobe Aero, I can conclude that it is a great tool, but it is still on its way to being as powerful as Reality Composer (which is much more technological and provides many more services and functionalities). Adobe Aero has great momentum because it allows sharing; that way of sharing, as a link or a video, is fantastic and outperforms other tools. But it doesn't beat Spark AR, because Spark AR has a large community of end users who use filters. We are at a very interesting and decisive stage, with many platforms and tools for the end user; we must wait a few weeks to see how these technologies evolve. If you liked this video, please share it, give it a "thumbs up", and subscribe so I can generate more content about Adobe Aero. That's all for now. Emiliusvgs says goodbye, byeeee!