Today’s guest tutorial comes to you courtesy of Matt Webster, a.k.a. “HoraceBury.” Matt is a Development Manager working in central London. He has 15 years of experience building enterprise sites in .NET and Java, but he prefers Corona to build games and physics-based apps. As a Corona Labs Ambassador, Matt has organized two London meet-ups and looks forward to doing more in 2013. Matt’s first game in Corona was Tiltopolis, a hybrid twist on the classics Columns and Tetris.

Preface

First, please download the project files so you can follow along with the code samples. Each of the “sampleX” modules is a functioning mini-project and should be worked through one at a time within main.lua. Uncomment only one require statement at a time to follow the workings of the logic. The last module, sample11.lua, is the entire pinch-zoom-rotate module which you can incorporate into your own project. Enjoy!

Introduction

Most applications (more than you’d expect) can perform perfectly fine with just one touch point. If you consider the large number of apps out there, you can see that many have a huge feature set but still get by with just a single point of input because they are designed around buttons or individual swipe actions, etc.

Take “Angry Birds,” for example. This game requires that every tap, drag, and swipe is performed by one finger. Navigating the menu, opening up settings, and firing the aforementioned birds with attitude is all done with one finger, and rightly so. It makes for a simple, intuitive and engrossing game. However, even this most basic interface requires one simple trick learned from iOS: using two fingers to “pinch” zoom in and out of the parallax-scrolling action.

So, that’s simple, isn’t it? The rule is: when one finger is used, perform the action for the object being touched. When two fingers are used, perform a gentle scaling of the top-level parent display group.

This tutorial aims to show you how to handle these multitouch scenarios with as little hassle as possible. It will also try to provide some insight into the oft-requested pinch zoom.

Touch Basics

If you’re reading this tutorial, you probably already have some experience with the Corona touch model, so I will just highlight the core tenets.

  • addEventListener() is used to listen to a particular display object for user touches.
  • There are two types of touch-related events: touch and tap.
  • The touch event is comprised of phases: began, moved and ended (plus cancelled, if the system interrupts the touch).
  • Listening to one display object for both touch and tap events will fire the touch event phases before the tap event fires.
  • Returning true from an event function stops Corona from passing that event to any display objects beneath the object.
  • system.activate("multitouch") enables multitouch.
  • Once a touch event has begun, future touch phases are directed to the same listener by calling display.getCurrentStage():setFocus().
  • setFocus can only be called once per object per event (without cancellation).
  • Calling dispatchEvent() on display objects fires artificial events.
  • Events fired with dispatchEvent do not propagate down the display hierarchy.

The Tap Problem

As described above, touch events have a number of phases which literally describe the user's interaction with the device: putting a finger on the screen, moving it around, and letting go.

If a tap listener is attached, the tap event fires when the above touch phases occur within a given time span (iOS uses roughly 350 milliseconds) and the began and ended locations are less than roughly 10 pixels apart.

This means that if you are listening for both touch and tap events you need to actually detect a tap within your touch listener function to know that your tap listener function is going to be called. So, if you’re already detecting taps you may as well not attach a tap listener at all. For the purposes of this tutorial that’s exactly what we’ll do: we will leave out tap events because they simply complicate our code.

Single Touch

To demonstrate the typical touch event, let’s create a display object with a standard touch listener and use it to move the display object around.
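A minimal sketch of such a listener might look like this (the sample modules in the project download contain the working version; names like rect are illustrative):

    local rect = display.newRect( 160, 240, 100, 100 )

    function rect:touch( e )
        if (e.phase == "began") then
            -- claim the touch so all of its later phases come to us
            display.getCurrentStage():setFocus( self )
            self.hasFocus = true
            -- remember the touch position relative to the object
            self.markX, self.markY = self.x - e.x, self.y - e.y
            return true
        elseif (self.hasFocus) then
            if (e.phase == "moved") then
                -- drag the object with the touch
                self.x, self.y = e.x + self.markX, e.y + self.markY
            else -- "ended" or "cancelled"
                display.getCurrentStage():setFocus( nil )
                self.hasFocus = false
            end
            return true
        end
        return false -- propagate unhandled touches
    end

    rect:addEventListener( "touch", rect )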

The above function handles touch events when multitouch is not activated. This isn't the simplest touch listener we could write, but it's practical and safe. Nor is it the most complex: any further work it needs to perform should be delegated to functions it calls. It caters for the following situations:

  • The touch starts on the object.
  • The touch is used to move the object.
  • Touches which start “off” the object are ignored.
  • Handled touches do not get passed to other display objects.
  • Ignored touches get propagated to other display objects.
  • The display object has its own :touch(e) function, not a global function.

Note that the object will ignore touches which start elsewhere, because the hasFocus flag marks whether the object accepted a touch's began phase and so may handle its later phases. It will also not lose the touch once it acquires it, because setFocus tells Corona to direct all further input to this object.

Multiple Touches

Fortunately, converting this function to be used by multiple display objects is not difficult. The catch with setFocus is that each display object can only listen for one touch because all other touch events are ignored on that object after it begins handling a touch.

To demonstrate multitouch we will convert the above code to create multiple objects which will handle one touch each.
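Converted for multitouch, the sketch becomes something like this (again, see the sample modules for the working code):

    system.activate( "multitouch" )

    local function newCircle( x, y )
        local circle = display.newCircle( x, y, 50 )

        function circle:touch( e )
            if (e.phase == "began") then
                -- pass the touch ID so we claim only this touch
                display.getCurrentStage():setFocus( self, e.id )
                self.hasFocus = true
                self.markX, self.markY = self.x - e.x, self.y - e.y
                return true
            elseif (self.hasFocus) then
                if (e.phase == "moved") then
                    self.x, self.y = e.x + self.markX, e.y + self.markY
                else -- "ended" or "cancelled"
                    -- release only this touch
                    display.getCurrentStage():setFocus( self, nil )
                    self.hasFocus = false
                end
                return true
            end
            return false
        end

        circle:addEventListener( "touch", circle )
        return circle
    end

    -- five circles, each movable by an independent touch
    for i = 1, 5 do
        newCircle( 60 * i, 240 )
    end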

Note the key differences in this code:

  • We have activated multitouch.
  • We have wrapped the display object creation so that it can be called repeatedly.
  • setFocus accepts a specific touch ID to differentiate between user screen contacts.
  • When ending the touch, setFocus accepts nil to release the object’s touch input.

With the code above, we should be able to create 5 large circles, each of which can be moved independently. As before, because we set hasFocus and now pass a specific touch ID to setFocus, each display object ignores touches which start elsewhere and never loses its own touch once it begins.

The Multitouch Problem

Remember that the strength of the code above is that it can distinguish between multiple touches easily. This is because objects will not lose their touch once they acquire it. This is both a huge bonus and a bit of a problem.

  • The bonus is that setFocus allows us to say, “Send every move this user’s touch makes to my object’s event listener and nowhere else.”
  • The slight problem is that setFocus also stops our display object from receiving any other touch events.

If we have not yet called setFocus, using hasFocus conveniently allows our object to ignore touches which don't begin there. This is useful because users often make an accidental swiping gesture on the background or an inactive part of the screen and swipe across our object; we want it to ignore touches which don't begin on it. So, the question is: how do we convince Corona to let our objects receive multiple touches, when the very functions which give us this ease of use prevent exactly that? The answer is to create a tracking object in the began phase.

The Concept

With a small change to the code above, we can create a single object which spawns multiple objects in its began phase. These objects will then track each touch individually. We will also change the code further to remove the tracking object when the touch ends. The complete code will have one function to listen for the touch event began phase and another to listen for moved, ended and cancelled phases. These two functions will be added to the target listening object and the tracking dot objects, respectively.

Spawning Tracking Dots

First, we need to create an object which will handle the began phase as before, but this time it will call a function to create a tracking dot.
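For example (newTrackDot() is forward-declared here and sketched in the next listing):

    local newTrackDot -- defined in the next listing

    local rect = display.newRect( 160, 240, 100, 100 )
    rect:setFillColor( 0, 0, 1 ) -- the small blue rectangle

    function rect:touch( e )
        if (e.phase == "began") then
            -- spawn a dot which will take over tracking this touch
            newTrackDot( e )
            return true
        end
        return false
    end

    rect:addEventListener( "touch", rect )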

This is pretty straightforward. It just creates a display object which listens for the began phase of any unhandled touch events. When it receives a touch with a began phase, it calls the function which will create a new display object. This new object will be able to track the touch by directing the future touch phases to itself (instead of “rect”) by calling setFocus. Note that we are not setting the hasFocus value because multitouch objects only need to handle the began phase.

Next, we need to create the tracking dot. This code is almost identical to the previous multitouch function.
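A sketch of that function:

    function newTrackDot( e )
        local circle = display.newCircle( e.x, e.y, 50 )

        function circle:touch( e )
            -- use the circle itself, not e.target (which is "rect")
            local target = circle

            if (e.phase == "began") then
                display.getCurrentStage():setFocus( target, e.id )
                target.markX, target.markY = target.x - e.x, target.y - e.y
            elseif (e.phase == "moved") then
                target.x, target.y = e.x + target.markX, e.y + target.markY
            else -- "ended" or "cancelled"
                display.getCurrentStage():setFocus( target, nil )
            end

            return true
        end

        circle:addEventListener( "touch", circle )

        -- the dot never saw the began phase itself, so hand it the
        -- event now; this takes the touch focus away from "rect"
        circle:touch( e )

        return circle
    end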

Note that the only two changes we’ve made to this function are:

  • We call circle:touch(e) because the circle has only been created and has not actually received the touch event’s began phase. Calling this allows the circle object to take control of the touch event away from the “rect” object and handle all future touch phases.
  • At the start of the :touch() function we use the circle as the target, because the e.target property actually refers to the “rect” object (where the touch began).

When this code is used with the code above we will see a small blue rectangle which can create multiple white circles. Each circle is moved by an independent touch. It is this mechanism which we can use to direct all of the touch information to our blue “rect” and pretend that it is receiving multitouch input.

Faking Multitouch Input

Our blue “rect” object is going to become the recipient of multiple touch inputs. To do this we need to first modify its touch listener function. At first we will simply add some print() statements for the moved, ended and cancelled phases. Here is the modified :touch() listener function for the small blue rectangle:
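A sketch (the .parent check is explained just below):

    function rect:touch( e )
        if (e.phase == "began") then
            newTrackDot( e )
            return true
        elseif (e.parent == self) then
            -- a tracking dot has forwarded this event to us
            if (e.phase == "moved") then
                print( "moved", e.x, e.y )
            elseif (e.phase == "ended" or e.phase == "cancelled") then
                print( e.phase, e.x, e.y )
            end
            return true
        end
        return false
    end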

The major change here is the addition of the moved, ended and cancelled phases. Doing this allows the tracking dots to call the :touch() function of the blue rectangle, passing in the event parameter received by the white circle’s touch function.

The elseif statement is also important here: if a tracking dot passes the event parameter to the rectangle, e.target will be a reference to the dot, not the rectangle. We will store a reference to the rectangle in the event's .parent property. This way, the rect:touch() function can determine whether it is the rightful recipient of the touch event. Of course, we haven't changed the circle's touch function to call the rectangle's :touch() yet. Before we do that, we need to make sure that each circle keeps a reference to the rectangle object so that it can call the rect:touch() function and pass it the event parameter.

Here is the start of the newTrackDot() function, which needs to make a local copy of the original .target property of the event parameter.
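Sketched, the opening looks like this (the body continues as before):

    function newTrackDot( e )
        -- copy the reference now, because by the time the dot
        -- forwards events, e.target will be the dot, not the rect
        local parent = e.target
        local circle = display.newCircle( e.x, e.y, 50 )
        -- (touch listener and focus handling follow, as before)
    end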

Keeping a reference to the object which received the original began event phase allows our tracking dots to send the multitouch events back to it. Now, we don’t need our tracking dots to send the began phase event parameter to the “rect” because it has already received that event. What we do need is to call rect:touch(e) in the :touch() function of the tracking dot so that the other phases get sent to our “rect” object.
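Putting it together, the tracking dot function now looks something like this:

    function newTrackDot( e )
        local parent = e.target -- the object the touch began on

        local circle = display.newCircle( e.x, e.y, 50 )

        function circle:touch( e )
            local target = circle

            if (e.phase == "began") then
                display.getCurrentStage():setFocus( target, e.id )
                target.markX, target.markY = target.x - e.x, target.y - e.y
            else
                if (e.phase == "moved") then
                    target.x, target.y = e.x + target.markX, e.y + target.markY
                else -- "ended" or "cancelled"
                    display.getCurrentStage():setFocus( target, nil )
                end
                -- forward every phase after began to the parent
                e.parent = parent
                parent:touch( e )
            end

            return true
        end

        circle:addEventListener( "touch", circle )
        circle:touch( e ) -- claim the began phase that spawned us

        return circle
    end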

Pretty simple. We now have a rectangle which creates a tracking dot for each touch it detects. Each of those dots also sends its touch information back to the rectangle, using the rectangle's original touch handler function. The rectangle will also know that it is the proper target.

The trick now is to make use of this multitouch information!

Employing Multitouch

We now have an object which can detect the start of multiple touch points. It spawns tracking dots for each point and receives touch events.

To make some basic use of this multitouch information we will position the “rect” display object at the centre of the touch points. This can all happen within the :touch() function of the rectangle object. To position the “rect” object at the centre of our multiple touch points we first need to find the average x and y of all the touch points. We’ll use a separate function for that.
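Something like this (named calcAvgCentre(), as it is referred to later):

    -- average position of all current tracking dots
    local function calcAvgCentre( dots )
        local x, y = 0, 0
        for i = 1, #dots do
            x = x + dots[i].x
            y = y + dots[i].y
        end
        return x / #dots, y / #dots
    end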

In order to call this function, “rect” needs to keep a list of the tracking dots it creates. We will add this list to the rectangle as a property, directly after we create it.
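For example:

    local rect = display.newRect( 160, 240, 100, 100 )
    rect:setFillColor( 0, 0, 1 )
    rect.dots = {} -- the tracking dots spawned by this object

    -- and in the began phase of rect:touch():
    --     self.dots[ #self.dots+1 ] = newTrackDot( e )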

Now we’ll get the average centre of those dots and update the x and y position of “rect”:
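Inside the forwarded-event branch of rect:touch():

    if (e.phase == "moved") then
        -- centre the rectangle on the average of all touch points
        self.x, self.y = calcAvgCentre( self.dots )
    end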

Run this code and you’ll see a small blue rectangle. Touch the rectangle and it produces a white circle. Moving this first circle will cause the blue rectangle to follow it precisely. Release the touch and create another white circle and you’ll see that the blue rectangle now stays at the midpoint between the two white circles. Create yet another and it will stay between the three circles, and so on.

Debugging and Devices

We now have a good Simulator debugging aid for multitouch-capable display objects. You'll notice, however, that when you release your touch from one of the tracking dots, the dot does not disappear. This is really great for debugging in the Simulator because you can pretend to have multiple touch points. It is not so great on a device, because you're filling up the screen with white circles.

To fix this, if it’s running on a physical device, the rect:touch() function needs to remove the tracking dots in the ended phase. First, however, we need to store a variable at the start of our code which indicates whether we are running on a device.
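The usual Corona idiom for this is:

    -- true on a real device, false in the Corona Simulator
    local isDevice = (system.getInfo( "environment" ) == "device")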

The isDevice variable will be true if the code is running on a real, physical device and it can be used to automatically remove the tracking dot when the user lifts their finger.
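In the ended branch of rect:touch(), that might look like this (table.indexOf() is Corona's built-in list search; the numTaps check is explained next):

    if (e.phase == "ended" or e.phase == "cancelled" or e.numTaps == 2) then
        if (isDevice or e.numTaps == 2) then
            -- remove the dot from our list and from the screen
            table.remove( self.dots, table.indexOf( self.dots, e.target ) )
            e.target:removeSelf()
        end
    end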

Notice that or e.numTaps == 2 is used. This allows the tracking dot to have a tap listener which also calls the rect:touch() function so that in the Simulator we can use a double tap to remove the tracking dot.

The tap listener should only listen for taps if the code is running in the Simulator, so we’ll use the isDevice variable again. The tap listener is added inside the newTrackDot() function which creates tracking dots.
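Sketched inside newTrackDot(), after the touch listener is attached:

    if (not isDevice) then
        -- Simulator only: double-tapping a dot sends the tap event
        -- to the parent so rect:touch() can remove the dot
        function circle:tap( e )
            if (e.numTaps == 2) then
                e.parent = parent
                parent:touch( e )
            end
            return true
        end
        circle:addEventListener( "tap", circle )
    end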

Note that we also:

  • Check for two taps, so that only a double tap will remove a tracking dot.
  • Set the .parent property, just as we do in the touch function.
  • Only attach the tap listener if the code is running on the Simulator.

Making it Useful

The code so far is useful but doesn’t do very much. We can move a small, blue rectangle around with more than one finger. The beauty of multitouch input devices is that the real world has an impact on the virtual. If all we want to do is move an image or collection of display objects around we can add this code to those objects and have them respond to the user’s touch. If we want it to be a bit more realistic, we should add some rotation and scaling.

Relative Motion

Before we do that, however, take a look at how the rectangle moves when you use one finger. It centres itself directly under the touch point. To be more believable, it should really move relative to the motion of the touch point. Unfortunately, this is not as simple a change as it would appear, because we need to cater for removing a touch point. We now need to move some code into the moved and ended phases.

To illustrate the complete change and to lay out the full rect:touch(e) code (it has changed a lot, after all), here's the whole function:
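(A sketch of the whole function; the definitive version is in the sample modules.)

    function rect:touch( e )
        if (e.phase == "began") then
            self.dots[ #self.dots+1 ] = newTrackDot( e )
            -- store the current centre of all touch points
            self.prevCx, self.prevCy = calcAvgCentre( self.dots )
            return true

        elseif (e.parent == self) then
            if (e.phase == "moved") then
                -- move by the change in the centre, not to the centre
                local cx, cy = calcAvgCentre( self.dots )
                self.x = self.x + (cx - self.prevCx)
                self.y = self.y + (cy - self.prevCy)
                self.prevCx, self.prevCy = cx, cy

            elseif (e.phase == "ended" or e.phase == "cancelled" or e.numTaps == 2) then
                if (isDevice or e.numTaps == 2) then
                    table.remove( self.dots, table.indexOf( self.dots, e.target ) )
                    e.target:removeSelf()
                end
                -- update the stored centre so that removing a finger
                -- does not throw off the next moved phase
                if (#self.dots > 0) then
                    self.prevCx, self.prevCy = calcAvgCentre( self.dots )
                end
            end
            return true
        end

        return false
    end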

The fairly significant change here is to:

  • Calculate the centre of all touches and store it for reference in the began phase.
  • Add the difference between the previous and current touch centres to the rect.x and rect.y in the moved phase.
  • Update the stored touches centre in the ended phase so that removing a finger does not throw off the next moved phase.

The user can now place any number of fingers on the rectangle, even change them, and move it around as if shifting a photo on a table. Of course, what it doesn’t do (yet) is rotate with their touch.

Scaling

With multitouch control of a display object, each transformation we want to apply requires taking the average across all of the tracking dots and applying it to the image at the dots' midpoint (their average location).

For scaling, this means that the mathematical process is:

  • Sum the distances between the midpoint and the tracking dots.
  • Get the average distance by dividing the sum distance by the number of dots.
  • Get the same average distance for the previous location of the tracking dots.
  • Take the difference between the previous and the current average distance.
  • Apply the difference as a multiplication to the display object’s .xScale and .yScale.

This is only slightly more advanced than how we applied the average translation when moving the display object with multiple tracking dots. To help us get these scaling values we'll need some basic library functions. The following function calculates the distance between two points on the screen; it's a standard geometry helper and widely used.
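For instance:

    -- distance between two points
    local function calcDistance( x1, y1, x2, y2 )
        local dx, dy = x2 - x1, y2 - y1
        return math.sqrt( dx * dx + dy * dy )
    end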

To get the midpoint of the tracking dots we’ll use the calcAvgCentre() function above. To get and store the average distance between the midpoint and the tracking dots we’ll use these functions. The first of these gets the current distance for each dot, stores it in the tracking dot and also saves the previously known distance. The second function calculates the difference between the previous and current set of distances.
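Sketches of the two, using the names the text implies (updateTracking() and calcAvgScaling()):

    -- refresh each dot's distance from the midpoint, keeping the
    -- previously known distance
    local function updateTracking( cx, cy, dots )
        for i = 1, #dots do
            local dot = dots[i]
            dot.prevDistance = dot.distance
            dot.distance = calcDistance( cx, cy, dot.x, dot.y )
        end
    end

    -- average ratio between the current and previous distances
    local function calcAvgScaling( dots )
        local scale = 0
        for i = 1, #dots do
            scale = scale + dots[i].distance / dots[i].prevDistance
        end
        return scale / #dots
    end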

Using these functions is simple. For the began and ended phases of the rect:touch() we just call them and they update our tracking dots with the appropriate values. Here is the additional update call for the began and ended phases:
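    -- in the began phase (and again in the ended phase, after a
    -- dot has been removed):
    local cx, cy = calcAvgCentre( self.dots )
    updateTracking( cx, cy, self.dots )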

The moved phase is a little more complex because this is where the real work is done. Fortunately, all we need to do here is update the tracking dots again and only apply the scaling if there is more than one tracking dot.
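The moved branch now becomes something like:

    if (e.phase == "moved") then
        -- working variables for the transformation values
        local scale, rotate = 1, 0

        -- translate, as before
        local cx, cy = calcAvgCentre( self.dots )
        self.x = self.x + (cx - self.prevCx)
        self.y = self.y + (cy - self.prevCy)
        self.prevCx, self.prevCy = cx, cy

        -- refresh the stored distances, then scale when pinching
        updateTracking( cx, cy, self.dots )
        if (#self.dots > 1) then
            scale = calcAvgScaling( self.dots )
            self.xScale = self.xScale * scale
            self.yScale = self.yScale * scale
        end
    end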

Above, we’ve made the following changes to the moved phase:

  • Declared variables to work with the forthcoming transformation values.
  • Called updateTracking to refresh the stored distance values of the tracking dots.
  • Used those distance values to calculate the average change in scale across the tracking dots.
  • Applied that scaling to the display object “rect”.

The display object now translates (moves) and scales (zooms) along with our tracking dots (touch points).

Rotation

To rotate our display object, the basic logic is: work out how much each tracking dot has rotated around the midpoint (of all the tracking dots), take the average, and add the difference between that and the previous amount to our object's .rotation value. This requires adding some more general math functions to our code.
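First, the bearing of one point from another:

    -- angle of the line from (x1,y1) to (x2,y2), in degrees
    local function calcAngle( x1, y1, x2, y2 )
        return math.deg( math.atan2( y2 - y1, x2 - x1 ) )
    end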

Because of an oddity in angle calculations, we will also need a function which can determine the smallest angle between two bearings around a circle. This is important because, when measuring the angle through which a tracking dot has rotated, we may accidentally end up with the larger, wrap-around angle; for example, a swing from 350 degrees to 80 degrees should be treated as 90 degrees, not 270 degrees.
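A sketch of that helper:

    -- smallest signed difference between two angles, in degrees,
    -- kept within (-180, 180]
    local function angleDiff( prev, current )
        local diff = current - prev
        while (diff > 180) do diff = diff - 360 end
        while (diff <= -180) do diff = diff + 360 end
        return diff
    end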

As with calcAvgScaling, we'll make use of the above function in a calcAvgRotation function to determine the average amount that all of the tracking dots have rotated around the midpoint. We also want to update each tracking dot's angle, and its previous angle, at the same time. Fortunately, we're already doing this for the tracking dot distances from the midpoint, so we can add the angle code there as well.
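Extending updateTracking() and adding calcAvgRotation():

    local function updateTracking( cx, cy, dots )
        for i = 1, #dots do
            local dot = dots[i]
            dot.prevDistance = dot.distance
            dot.distance = calcDistance( cx, cy, dot.x, dot.y )
            -- now also track each dot's angle around the midpoint
            dot.prevAngle = dot.angle
            dot.angle = calcAngle( cx, cy, dot.x, dot.y )
        end
    end

    -- average change in angle around the midpoint
    local function calcAvgRotation( dots )
        local rotation = 0
        for i = 1, #dots do
            rotation = rotation + angleDiff( dots[i].prevAngle, dots[i].angle )
        end
        return rotation / #dots
    end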

Now, due to this small addition of code, the rect:touch() function is already updating the appropriate values in the began and ended phases. All we have to do is apply rotation to the “rect” display object in the moved phase. Of course, we only need to do this if there is more than one tracking dot. So, we simply call the functions described earlier to calculate the average amount of rotation around the tracking dots’ midpoint and apply it to the display object.
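In the moved branch, next to the scaling:

    if (#self.dots > 1) then
        rotate = calcAvgRotation( self.dots )
        self.rotation = self.rotation + rotate
    end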

Pinch Centre Translation

Run the code now and you'll notice that while the display object rotates, scales and moves with the tracking dots, it doesn't stay pinned beneath them. This is because, unless the user is very lucky (or not paying attention), the midpoint of the touches will never be the exact centre of the display object being manipulated.

To solve this, we can't just apply the basic translation, scaling, and rotation to the display object itself; we also need to apply them to the centre point location of the display object. This means that:

  • Scaling should be applied to the distance between the midpoint and the “rect” centre.
  • Rotation should be applied to the “rect” centre, rotated around the tracking dot midpoint.
  • But, fortunately, we’re already applying translation, so that can be ignored.

Ok, so what standard library maths functions do we need? Well, we want to rotate a point around another point, so we need the following math helper. The moved phase also needs some additions.
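Here's the helper, followed by a sketch of the revised moved branch:

    -- rotate point (x,y) around point (cx,cy) by 'angle' degrees
    local function rotateAboutPoint( x, y, cx, cy, angle )
        local rad = math.rad( angle )
        local s, c = math.sin( rad ), math.cos( rad )
        local dx, dy = x - cx, y - cy
        return cx + dx * c - dy * s, cy + dx * s + dy * c
    end

    -- the revised moved branch of rect:touch()
    if (e.phase == "moved") then
        local cx, cy = calcAvgCentre( self.dots )
        local pt = { x = self.x, y = self.y } -- working position

        -- translate with the midpoint of the touches
        pt.x = pt.x + (cx - self.prevCx)
        pt.y = pt.y + (cy - self.prevCy)
        self.prevCx, self.prevCy = cx, cy

        updateTracking( cx, cy, self.dots )

        if (#self.dots > 1) then
            local scale = calcAvgScaling( self.dots )
            local rotate = calcAvgRotation( self.dots )

            -- scale the midpoint-to-centre distance
            pt.x = cx + (pt.x - cx) * scale
            pt.y = cy + (pt.y - cy) * scale

            -- rotate the object centre around the midpoint
            pt.x, pt.y = rotateAboutPoint( pt.x, pt.y, cx, cy, rotate )

            self.xScale = self.xScale * scale
            self.yScale = self.yScale * scale
            self.rotation = self.rotation + rotate
        end

        self.x, self.y = pt.x, pt.y
    end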

The moved phase is now doing a number of things, whether there’s one tracking dot or many:

  • pt is declared to use as a working space for the display object’s position.
  • The midpoint translation is applied to the working object.
  • The distance between the midpoint and the display object centre is scaled.
  • The centre of the display object is rotated around the midpoint.

Run the code now and no matter where you place your fingers, real or virtual (in the Simulator), as long as the touch (tracking dot) is started on the display object it will pinch-zoom with the touch points.

The effect is most obvious when using two fingers because the tracking points stay precisely relative to their starting location on the display object, but more can be used and the result is the same, just a little more averaged across the touch points.

And Finally…

Everything so far has relied on a single display object being manipulated. When does that happen in the real world? Realistically, a program will need a group of objects to be pinch-zoomed. More importantly, what use is a complex function if it can’t be re-used?

To re-use the :touch() function so that it can be attached to any display object, image or group, simply change the references it uses. To show that, let's create a display group with a number of objects inside, and attach a touch listener and function to that group!
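One way to sketch that, assuming the listener only ever refers to self rather than rect directly (sample11.lua packages this up properly as a reusable module):

    -- build a group of objects and make the whole group pinchable
    local group = display.newGroup()

    for i = 1, 3 do
        display.newRect( group, i * 60, i * 60, 50, 50 )
    end

    group.dots = {}            -- the handler expects a .dots list
    group.touch = rect.touch   -- re-use the same listener function
    group:addEventListener( "touch", group )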

In Summary

And there we have it: a touch listener module which can be applied to any display object or group to implement multitouch pinch-zoom-rotation.

If you didn’t already download the project from the link at top, you can get it here.

As usual, please participate in the conversation by posting your questions and comments below.

Original post: Implementing Pinch-Zoom-Rotate