My name is Shahab Behzumi, and I am a video artist and hyperlapse photographer.

The Beautiful Art of Hyperlapse

In this clip we will take a closer look at the art of hyperlapse and go into the possibilities of digital
post-production editing in detail.

What is hyperlapse? What does it mean? Many understand the term as a novel time-lapse technique in which you keep an object in focus while moving the camera between shots, expanding classical static time-lapse photography with a third dimension. Usually this is realized with motorized camera sliders; these, however, lack the flexibility and the length of camera flights you can achieve when hyperlapsing.

But hyperlapse is a lot more than that. You could look at it as a form of meditation, or even as a sport, because, similarly to archery, you have to focus on a certain point and moment. Because it enhances concentration and body control, it might even be useful as therapy or as a group activity. The real beauty of hyperlapsing is that you can visualize your every step and share the result with those around you.

Hyperlapsing can be done at any age and doesn't require a lot of money to get started. When I started, I drew a crosshair on my camera display with a permanent marker to aim at my objects, and it worked excellently. Fortunately, most cameras have grids in live view and in their viewfinders to facilitate precise aiming.

There are hyperlapses in all variations and for all applications: with a tripod or freehand, with compact cameras or heavy equipment. Some take a long time and require hours or days; some are fast and can be completed in mere minutes. There really are no limits to your creativity and experimentation.

A bit later in the video we will go into macro hyperlapsing and take a scenic flight to Oliver Kmia, who has engaged intensively in drone hyperlapsing over the last years. But first we will perform a shorter hyperlapse, which I recorded specifically for this video in my hometown of Kaiserslautern.

The very first step is choosing an object. Sometimes Google Earth can be very useful to get a first impression of the environment around the desired object. This way,
you can immediately identify obstacles around the object that would later obstruct your view. To get started, we choose a free-standing building with no obstacles around it whatsoever. We walk along our planned path and already choose a point on the object itself: a target point that stays visible the whole time and ideally will not be covered by other parts of the object as the perspective changes during the actual hyperlapse photography. In this case, it is the white square surface on the roof with the antennas on it, because they will be very useful later for the tracking process. The white square surface is detected brilliantly by the autofocus, so I can stay focused solely on my aiming and my footsteps.

Before we get started with the actual hyperlapse, we adjust some settings on the camera. These are the three basic adjustments: shutter speed, aperture, and ISO. We should set the aperture to a value that remains identical no matter how far we zoom the lens in or out. In addition, we add the virtual horizon to our live view, and, via the function button, to the viewfinder as well. With this method you can look through the viewfinder even while hyperlapsing and check whether you are still level without losing sight of the object.

Depending on whether we go handheld or with a tripod, we switch the vibration reduction of our lens on or off. For a handheld hyperlapse, vibration reduction can be really useful; with a tripod, by contrast, it mostly leads to shaky footage, and with long exposures using ND filters it is completely inadequate.

Especially with hyperlapses of gradually increasing focal length, it is very important to determine the proper position of the focus measuring field BEFORE we begin! Changing its position afterwards is usually difficult and awkward.
That's why it is advisable to define exactly which parts of the object stay visible from the beginning to the end of the composition beforehand. It has happened to me more than once that I accidentally touched the D-pad of the camera and thereby changed the position of the focus measuring field; as a result, I aimed exactly next to the target for the rest of the shoot without noticing. Generally, we can say: the position of the focus measuring field is something we should always keep an eye on.

Well, if the settings are all right, the batteries are charged, and the interval between exposures is chosen, we get started with our first steps. Here we walk from left to right across a distance of about 180 meters; the object, about 1 km away, always remains in view. Picture by picture, we take a comfortable but consistent step to the right while looking through the viewfinder. Because we are relatively far away from the object, our steps shouldn't be too close to each other, as we really want to see a perspective change in the end result. But careful! Don't overdo the distances either; otherwise we will reach the end of our accessible pathway, or lose sight of the object, without having taken enough pictures, and our hyperlapse will end up way too short.

Every time we move the measuring square over the white surface, we press and hold the shutter button halfway until the VR activates and the lens focuses. When the aim is accurate enough, we hold our breath and shoot.
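This trade-off between step size, number of pictures, and final clip length is easy to estimate before you set off. A minimal sketch; the 180 m path is from this shoot, while the step size, interval, and frame rate are assumed example values:

```python
# Rough hyperlapse planning: how path length, step size, shooting
# interval, and playback frame rate interact. Example numbers only.

path_length_m = 180      # accessible walking path (as in this shoot)
step_m = 0.5             # distance moved between shots (assumed)
interval_s = 7           # seconds per shot, handheld (assumed)
fps = 25                 # playback frame rate of the final clip

shots = int(path_length_m / step_m)          # photos we can take
shoot_time_min = shots * interval_s / 60     # time spent shooting
clip_s = shots / fps                         # resulting clip length

print(f"{shots} shots, ~{shoot_time_min:.0f} min shooting, "
      f"{clip_s:.1f} s of footage at {fps} fps")
# → 360 shots, ~42 min shooting, 14.4 s of footage at 25 fps
```

Doubling the step size would halve both the shooting time and the clip length, which is exactly why running out of path with too few pictures leaves you with a clip that is way too short.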
Then: one step to the right, zoom in slightly, steady yourself, and repeat the previous process until we reach the end of our track. Thanks to the vibration reduction, aiming is really easy in this case, even without a tripod, so we can stick to our interval of about 7 seconds almost consistently. For freehand hyperlapses I discourage using an intervalometer, because the time varies a tiny bit from picture to picture. Your arms remain half bent, like in a shooting stance, and should move as smoothly and evenly as possible. In general, the movements should be uniform, so before we start at a certain pace we should make sure we can keep that speed up constantly, even for several minutes or hours.

The same can be said about tripod hyperlapsing, with the difference that you can handle far longer time scales and distances than with the handheld method. This method automatically implies an overall calmer and more precise approach. Handheld, you more or less sweep the measuring field smoothly over the target point and expose at the best moment; with the tripod, everything is adjusted accurately before each exposure, verified, and then triggered remotely. The use of an intervalometer and ND filters, or of the mirror lock-up function, can also be possible and sensible in this case. One should generally allow more time between the shots in order to stay in rhythm; I would say around 15 to 20 seconds are good values to get familiar with it. I would choose geared heads, because then the play between the movable parts does not affect our aiming: you don't have to manually lock anything after painstakingly adjusting and fine-tuning the aim point, which would mess it all up again.

While I prefer the viewfinder for freehand hyperlapses, I think live view is the better choice when hyperlapsing with a tripod.
Since we allow ourselves more time anyway, we can even use the digital zoom inside live view, with which we can aim and focus even more precisely. Strictly speaking, an exact aiming point is easier to reproduce in live view than through the viewfinder, because the position of your eye, and thereby the angle of view relative to the measuring field and thus to the target point, varies each time. As a result the hyperlapse becomes more shaky, which necessitates more stabilization, and that means a reduction of our resolution. With a little training you will achieve even shorter intervals and longer distances, so that you can consider using an intervalometer with fixed intervals. "Click!"

Now everything will be stabilized! For that we use LRTimelapse and Lightroom. To be on the safe side, I created three similar hyperlapses of the same object; for the demonstration we will take the shortest one, with 90 photos.

First we open Lightroom and go to the Library, then we drag and drop the first picture of our sequence folder into the Library window. When all pictures are loaded, we switch to the Develop module. Now we look at the pictures, starting with the first… and the last one! Both of them have to be marked with a 4-star rating. We start with the post-production of the last picture, as it is obvious what has to be done here (brightness). Then we switch over to the first one and do the same. The adjustments can be done however you prefer; depending on what kind of gradient you want to add to the sequence, you adjust the first picture relative to the last one. When the editing is done and everything pleases us, we select all pictures, press the right mouse button, and go to "Metadata" -> "Save Metadata to Files". Now you can see the XMP files being written into the source folder.
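Conceptually, the gradient generated across the 90 pictures is a linear interpolation of every develop parameter between the first and last keyframed frame. A minimal sketch of that idea; the parameter names and values are made up, and the real LRTimelapse additionally deflickers against measured luminance:

```python
# Keyframe-plus-gradient idea: ramp each develop parameter linearly
# from the first rated frame to the last. Illustrative values only.

def transition(first: dict, last: dict, n_frames: int) -> list:
    """Return per-frame develop settings ramping from first to last."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)        # 0.0 at frame 1, 1.0 at frame n
        frames.append({k: first[k] + t * (last[k] - first[k])
                       for k in first})
    return frames

first = {"Exposure": 0.0,  "Temperature": 5200}   # assumed keyframe
last  = {"Exposure": -1.5, "Temperature": 4400}   # assumed keyframe

seq = transition(first, last, 90)     # 90 photos, as in this tutorial
print(seq[0], seq[44], seq[89])       # start, middle, end of the ramp
```

Each picture then carries its own interpolated settings in its XMP sidecar, which is why the previews shift gradually from the first look to the last.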
In the next step, we generate a gradient from the first to the 90th picture with LRTimelapse. This is done by pressing "Auto Transition" and "Save", and after that clicking "Visual Preview". It is absolutely important to understand that every change applied to the deflicker settings requires recalculating the transition from scratch, and thus the visual preview must be recreated by the program as well. As you can see here, I experimented a little with the settings until I finally liked them. For the fine changes to have any effect, we always have to press "Save"; otherwise it won't work. Well, I am pretty satisfied with this sequence. I will press "Save" once again, just to be sure.

Back in Lightroom, we retrieve the changed metadata by right-clicking with all photos selected and choosing "Metadata" -> "Read Metadata from Files". We can watch the previews changing. You can still make some small changes by all means; however, when synchronizing you must make sure to select only parameters that are not part of the gradient. The small changes to the luminance here weren't part of the gradient, which is why they have no impact on it. We save all metadata once again and export directly. Rather than JPEGs, one could also let After Effects read the raw files, but that workflow would be way too slow. Here you can see Lightroom writing the JPEGs into the folder.

Now we open After Effects and import the JPEGs. Ideally, the program and the pictures are both stored on an SSD; in this case, on the desktop. Once we have found the folder, we select the first picture, making sure that the "JPEG Sequence" checkbox is checked. Then we drag and drop the sequence from the project window into the timeline. In the project settings, under "Display Format and Time Designation", I change the format so that the actual frame numbers are displayed.
I copy the name of the sequence and save the whole After Effects project file under the same name. Now we look at the sequence as a whole and hope there are no hidden duplicate pictures. The preview resolution, and also the processing resolution, should be set to "Full" so that we can work accurately.

Then it is always worth testing whether the Warp Stabilizer can take over a huge part of our manual work. We click on "Advanced" -> "Detailed Analysis". Under "Framing" we choose "Stabilize Only", and under "Method" we choose "Position, Scale, Rotation". For the smoothness I often choose 1 %; it depends. The whole thing now takes a while until everything is analyzed and stabilized. Let's have a look at how the Warp Stabilizer performed. Not bad. Nevertheless, you can see that there are still some minor shakes in there.

In order to apply the Warp Stabilizer a second time on top of the first pass, we need to create a second composition, a subcomposition. And to avoid the edges being cut off, I enlarge the frame under "Composition" -> "Composition Settings". Now we hold the left mouse button and… ah! Rename first, important! This is the warped sequence, and in order to distinguish the two, I rename it beforehand: "warped only". Good. Now we take "warped only" and drag and drop it, with the left mouse button held, onto this tiny "movie" symbol. This automatically creates "warped only 2". Now we can apply the Warp Stabilizer a second time, on top of the already stabilized composition.

What I have often noticed is that sharpness suffers a little the more copies of a composition we create: every further copy on which we apply a new Warp Stabilizer instance reduces the original sharpness a little more. That's why I compare both areas with each other. But I think with only one copy the difference in sharpness isn't too noticeable yet, in contrast to a fifth copy.
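This softening has a simple explanation: every stabilizer pass resamples the frame, and each resampling acts as a slight low-pass filter. A toy one-dimensional demonstration with NumPy; this is not After Effects' actual resampler, just linear interpolation to show the effect:

```python
# Why stacking stabilizer passes softens the image: each pass
# resamples the frame, and every resampling blurs slightly.
import numpy as np

def subpixel_shift(signal, shift=0.5):
    """One resampling pass: shift a 1-D signal by a fraction of a
    pixel using linear interpolation, as a stabilizer would."""
    x = np.arange(len(signal)) - shift
    return np.interp(x, np.arange(len(signal)), signal)

edge = np.zeros(64)
edge[32:] = 1.0                       # a perfectly sharp edge

once = subpixel_shift(edge)           # one stabilizer pass
five = edge
for _ in range(5):                    # five stacked passes
    five = subpixel_shift(five)

# Measure edge sharpness as the maximum per-pixel gradient.
for name, img in [("original", edge), ("1 pass", once), ("5 passes", five)]:
    print(name, round(float(np.abs(np.diff(img)).max()), 3))
```

One half-pixel resample already halves the edge gradient, and five stacked passes smear the edge further still, which matches what you see when you pile warped copies on top of each other.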
Anyway, here we can't really spot huge differences, if any at all. But if you repeated this step five times, you would surely see a huge reduction in sharpness!

Well, the second Warp Stabilizer pass is finished. Now we show the grid via the "View" menu, and I will change the colour of the grid lines. This can be done in "Edit" -> "Preferences" -> "Grids & Guides"; under "Colour", just choose red, for example. OK. Look at what the Warp Stabilizer did with the composition. Well, there is a difference. It already looks quite good, no question; the second pass improved it a little more. But the pixel-exact precision that can be achieved by manual tracking is something the Warp Stabilizer is simply not capable of. That's why I've created a new composition from the original shaky one: that is the original sequence once again, and it will be tracked completely.

But before that, we will edit "warped only 2" with another method: the transform properties. Often the Warp Stabilizer does a decent job except for a few parts of the sequence, so you really don't have to work through the whole thing from scratch; you can just stabilize those few frames with "Position", "Rotation", and "Scale". With this half-automatic, half-manual method we can get really good results. The Warp Stabilizer makes everything nice and smooth, but it does not pin the object to one pixel, so we can help it along by reworking single frames. Should this take too long, you can consider tracking the native sequence directly with the manual tracker. Sometimes it really makes more sense to start manually in the first place, rather than painstakingly trying to rescue what the Warp Stabilizer fabricated.

Now we move over to the original, unstabilized file, which I just renamed to "tracked", as I want to track it now. I drag the guides into the composition window and begin the tracking.
For that we open a second viewer, so we can see what impact the changes have on the sequence. In the Tracker panel we click "Stabilize Motion" and drag Track Point 1 to a suitable position, preferably onto a contrast-rich spot on the object. To improve the tracker's chances of catching the same spot again, we enlarge the inner and outer frames of Track Point 1, though this extends the processing time for each frame. Then there are the tracking options: for example, you can set the tracker to use the RGB channels or the saturation, or make it stop if the confidence drops below a certain value. You really can experiment with different settings here. I now set the tracker to stop if the confidence goes below 90 %. What I have noticed is that if we analyze forward frame by frame, the positioning is much more precise than if we let the tracker analyze everything at once.

If the tracker simply doesn't catch the right spot and you realize it makes no sense to continue like this, you should try another point and test whether the tracker picks it up more reliably. If that doesn't work either, there is a trick which I will show you now. We delete the whole track and begin from scratch. Click "Stabilize Motion", and DON'T use the little cross in the middle; instead, we use the track point itself as a kind of geometric template. We don't dare to click "Analyze Forward" or anything of the sort; instead we go frame by frame and simply use the inner and outer frames of the track point as a template. Since these are squares and totally symmetrical, the point in the middle will also remain in the middle, as long as we avoid EVER clicking "Analyze 1 Frame Forward". We stay away from the analysis buttons in the Tracker panel and instead "misuse" the squares as an alignment template.
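What the automatic tracker with a confidence cutoff does can be sketched as template matching: slide the contents of the track point over the next frame, keep the best-matching position, and give up when the match score falls below the threshold. A simplified stand-in with normalized cross-correlation, not After Effects' actual algorithm (which also refines to subpixel positions and searches a limited region):

```python
# Template matching with a confidence threshold, sketching the idea
# behind "Stabilize Motion" stopping below 90 % confidence.
import numpy as np

def track(frame, template, threshold=0.9):
    """Return ((row, col), confidence) of the best template match,
    or None if the confidence falls below the threshold."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -1.0, None
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            patch = frame[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((t * p).mean())     # correlation in [-1, 1]
            if score > best:
                best, best_pos = score, (r, c)
    return (best_pos, best) if best >= threshold else None

# Tiny synthetic example: a bright blob that moved by (2, 1) pixels.
blob = np.array([[1, 2, 1], [2, 5, 2], [1, 2, 1]], float)
frame0 = np.zeros((12, 12)); frame0[4:7, 4:7] = blob
frame1 = np.zeros((12, 12)); frame1[6:9, 5:8] = blob

template = frame0[4:7, 4:7]           # the track point's contents
print(track(frame1, template))        # finds the blob at (6, 5)
```

On a real, noisy frame the best score stays below 1.0, and a cluttered or low-contrast target is exactly the situation where the score drops under the 90 % cutoff and the tracker stops.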
Here it is important to activate "Feature Center", "Confidence", and "Attach Point" among the track point's properties; otherwise you would track nothing. You would track, track, and track until you were done with the whole sequence, and then, when you wanted to see what happened, nothing would have happened. You would have spent half an hour meticulously aligning the squares, but for nothing! So be aware: "Feature Center", "Confidence", and "Attach Point" must be activated in order to set the square manually frame by frame.

You do this for a while, until you realize that you have to do it all a second time, for Track Point 2, which is responsible for the rotation. Put it on a suitable point that is visible in every frame. In this case I again avoid letting the tracker search for a contrast point and instead predefine the perspective middle of the building; even as the view rotates on, the middle of the square will constantly remain over the middle of the building. I will speed this up a little bit.

Well, now we check the intermediate result and press "Apply". Unfortunately, I am not completely happy; I actually liked the warped version better. That's why I make a copy of the first warped one, rename it "warped", and, as I will track it immediately, call it "warped & tracked". With this "warped & tracked" we do the same as before, but now from the end to the start, because the most visible shakes were at the end. I attach the track point to the white satellite dish and work through it backwards step by step. We let the tracker analyze backwards semi-automatically, frame by frame, as this is more precise. And if the tracker doesn't hit the pixel-exact middle of the satellite dish, we only need to help it a tiny bit each time, because the Warp Stabilizer has already accomplished a lot of pre-stabilization, and thus the positions to be tracked are already closer together.
They are no longer a few hundred pixels apart, but only a few dozen, so the tracker doesn't have to search as much. Because I like this sequence the most, I rename it "Tracked – BEST"; this way I know that, of all the attempts, this one worked out best. Nevertheless, I will fine-stabilize it a little more right here, and for that I drag the anchor point of the composition to the middle of the satellite dish.

Then we adjust the render settings, in this case as a JPEG sequence, as I have a plan for this sequence. Once we render it, it's finished. Here it is in stabilized condition, possibly still further correctable. Here is the original version, and here both of them in direct comparison. And for completeness, I will show you the two other hyperlapse versions I shot of the same building for this tutorial.

Well, with that I say goodbye. I hope you liked it. See you next time, goodbye!