Over the last eighteen months, stable|kernel has contributed to the phenomenon known as the Internet of Things — connecting mobile devices to physical devices.

A lot of the things you do from a mobile application are expected to happen immediately, but physical devices don’t necessarily move at the speed of touch. The user expects feedback, wants to move on, and wants to be notified if their action does not succeed. This poses a problem.

Consider someone controlling a piece of hardware from their phone. When they increment a value one-by-one from 0 to 10, we don’t want to send a network request for 1, 2, 3 and so on. No, we’d like to send over the user’s ‘final’ value once they’ve stopped incrementing for a period of time. This is a chance for optimization that, at the scale of a large user base, will pay off in the long run. To that end, we need to do our part and limit the amount of network noise by “debouncing” our call.

In our apps, we want the UI to update immediately and the resource-intensive tasks to execute at a later time when it is reasonable to assume the user is done making changes.

As I’m drafting this post in Mou, a Markdown editor, I see this behavior in action. I type into a plain-text pane, and if I type fast enough, the marked-up preview of my document doesn’t update. Once I lose my flow for even a moment, I see my changes propagate to the preview.

This has multiple benefits:

  • It lets me focus on the task at hand without having to wait for the preview to finish updating. (Rendering the preview takes longer than pressing the next key.)
  • It gives me feedback in a reasonable amount of time.
  • It is more efficient with respect to resources.

This is a pleasant experience, and our solution mirrors it closely.

STKLastOutQueue

We created a utility that will only truly execute a request if we believe the user has ‘finished’ making changes.

STKLastOutQueue.h

The interface is fairly simple:

#import <Foundation/Foundation.h>

@interface STKLastOutQueue : NSObject

+ (STKLastOutQueue *)defaultQueue;

- (void)enqueueTask:(void (^)())task
            onStart:(void (^)())onStart
             forKey:(NSString *)key;

- (void)executePendingTasks;

@end

  • task is the resource-intensive block to be executed.
  • onStart is called right before the task is executed (so we know when our task actually starts).
  • key is an identifier that groups tasks together, so we can use the same queue to handle multiple tasks.
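
For instance, two tasks enqueued under different keys are debounced independently. The selectors below are hypothetical, just to show the shape of a call:

// Hypothetical callers: each key gets its own 'last out' timer, so enqueueing
// a brightness change does not reset the pending volume change (and vice versa).
[[STKLastOutQueue defaultQueue] enqueueTask:^{
    [self saveBrightnessToDevice];              // made-up resource intensive call
} onStart:^{
    [[self brightnessSpinner] startAnimating];  // made-up UI feedback
} forKey:@"brightness"];

[[STKLastOutQueue defaultQueue] enqueueTask:^{
    [self saveVolumeToDevice];                  // made-up resource intensive call
} onStart:^{
    [[self volumeSpinner] startAnimating];      // made-up UI feedback
} forKey:@"volume"];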

STKLastOutQueue.m

The internals of the queue look like this:

#import "STKLastOutQueue.h"

static NSString * const STKLastOutQueueTaskKey = @"STKLastOutQueueTaskKey";

@interface STKLastOutQueueElement : NSObject
@property (nonatomic, strong) void (^task)(void);
@property (nonatomic, strong) void (^onStart)(void);
@property (nonatomic, strong) NSTimer *timer;
@end

@implementation STKLastOutQueueElement
@end

@interface STKLastOutQueue ()

+ (NSTimeInterval)delay;    // delay used by timing mechanism

@property (nonatomic, strong) NSMutableDictionary *elements;

@end

We will keep a dictionary of elements, where each element holds the task it will perform, the onStart block it will execute right before the task finally goes through, and the timer that will trigger it.

Implementing the extension and initializing data is straightforward:

@implementation STKLastOutQueue

+ (NSTimeInterval)delay
{
    return 2;
}

+ (STKLastOutQueue *)defaultQueue
{
    static STKLastOutQueue *q = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        q = [[STKLastOutQueue alloc] init];
    });

    return q;
}

- (id)init
{
    self = [super init];

    if (self) {
        _elements = [[NSMutableDictionary alloc] init];
    }

    return self;
}

Now, the fun stuff:

- (void)enqueueTask:(void (^)())task
            onStart:(void (^)())onStart
             forKey:(NSString *)key
{
    NSLog(@"Enqueue; %@", task);

    NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:[STKLastOutQueue delay]
                                                      target:self
                                                    selector:@selector(performTask:)
                                                    userInfo:@{STKLastOutQueueTaskKey : key}
                                                     repeats:NO];

    STKLastOutQueueElement *e = [[STKLastOutQueueElement alloc] init];
    [e setTimer:timer];
    [e setOnStart:onStart];
    [e setTask:task];

    [[[[self elements] objectForKey:key] timer] invalidate];
    [[self elements] setObject:e forKey:key];
}

First, this method schedules a timer whose user info lets us retrieve the request later, then stores the timer and blocks in an element associated with the key. If an element is already enqueued for that key when a new task arrives, its timer is invalidated (so it never fires) and the element is replaced in our dictionary.

This gets us the desired debouncing behavior and our -performTask: method will only execute once after our specified delay.

- (void)performTask:(NSTimer *)timer
{
    NSString *key = [[timer userInfo] objectForKey:STKLastOutQueueTaskKey];
    STKLastOutQueueElement *e = [[self elements] objectForKey:key];

    void (^task)() = [e task];
    void (^onStart)() = [e onStart];

    NSLog(@"starting task %@nkey - %@", task, key);

    [[self elements] removeObjectForKey:key];

    // execute blocks

    onStart();
    task();
}

We execute the task and onStart blocks and remove all information related to this request, effectively resetting the “queue” associated with the key. Notice the order of execution in the last code block, since it is important.

We store the blocks we wish to execute in local variables, then we discard the element they belonged to before executing those blocks. The motivation here is that we do not know what the block is doing; it may very well re-enqueue an operation on this same queue with the same key! If we were to swap the order of operations (calling the task and onStart blocks first and removing the element afterwards), we would inadvertently kill off that newly enqueued task. In general, if you keep a persistent reference to a block, clear that reference before executing the block.
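
To make that hazard concrete, imagine a task that re-enqueues work under the same key, say, to retry a failed write. The network call and UI hook below are made-up placeholders; the point is the ordering. Because -performTask: removes the element before invoking the block, the element this block adds for the same key survives; with the order swapped, it would be removed immediately.

// Hypothetical re-enqueueing task. While this block runs, it may add a fresh
// element under @"device-sync". Since -performTask: already removed the old
// element for this key, nothing clobbers the new one after the block returns.
STKLastOutQueue *queue = [STKLastOutQueue defaultQueue];
[queue enqueueTask:^{
    NSError *error = [self pushPendingChangesToDevice];    // made-up network call
    if (error) {
        [queue enqueueTask:^{ [self retryPushLater]; }      // made-up retry helper
                   onStart:^{}
                    forKey:@"device-sync"];                 // same key as before
    }
} onStart:^{
    [[self syncIndicator] startAnimating];                  // made-up UI hook
} forKey:@"device-sync"];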

Finally, we sometimes want to force the queue to execute. This is useful when an application is moving to the background or when we’re leaving the screen that is filling up the queue.

- (void)executePendingTasks
{
    NSArray *pending = [[self elements] allValues];
    [[self elements] removeAllObjects];

    for (STKLastOutQueueElement *e in pending) {
        [[e timer] invalidate];
        void (^task)() = [e task];
        void (^onStart)() = [e onStart];

        NSLog(@"starting task %@", task);

        onStart();
        task();
    }
}

This executes the last request for each key, resetting each ‘queue’ associated with the key. Again, notice we clear out the dictionary of elements prior to executing each block.
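
For example, an app delegate might flush the queue on its way to the background. The callback below is the standard UIApplicationDelegate method; calling -executePendingTasks here is just one reasonable place to do it.

// Flush any debounced work whose timer is still counting down, so the user's
// last change is not lost when the app is suspended.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    [[STKLastOutQueue defaultQueue] executePendingTasks];
}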

STKLastOutQueue in action

- (IBAction)stepperValueChanged:(UIStepper *)stepper {
    //update UI
    [[self countLabel] setText:[NSString stringWithFormat:@"%d", (int)[stepper value]]];

    [[STKLastOutQueue defaultQueue] enqueueTask:^{
        // resource intensive task
    } onStart:^{
        // potential UI updates indicating the task has started
    } forKey:@"stepper-related-task"];
}

An alternative approach

Creating a class for this behavior may not fit your needs. The benefits are:

  • Reusability
  • The implementation can change underneath

We achieved our last out queue behavior using NSTimer. A valuable alternative would be to use Grand Central Dispatch’s dispatch_after() function.

- (IBAction)stepperValueChanged:(UIStepper *)stepper
{
    // update UI
    [[self label] setText:[NSString stringWithFormat:@"%f", [stepper value]]];

    // enqueue task
    static void (^taskPtr)(void) = nil;
    void (^task)(void) = ^{
        // resource intensive task
    };

    taskPtr = task;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   dispatch_get_main_queue(), ^{
        if (task == taskPtr) {
            taskPtr = nil;           // remove persistent reference to block before executing
            [self onStartMethod];    // analogous to onStart
            task();
        }
    });
}
Every time the stepper is pressed, the UI is updated, a new task is created, and a static pointer is set to the most recently requested task. Two seconds later, the dispatched block compares the task it captured against the static pointer taskPtr. If no further requests were enqueued, the captured task matches the pointer and the task executes. If more requests were inserted, a dispatched block still fires for each of them, but the conditional is only satisfied by the block whose captured task matches the last request enqueued.

Again, we remove the persistent reference to the block before executing it.

Decisions, decisions, decisions

Some upsides to using this approach:

  • It can be easily modified to work on a background queue (see the sketch after this list).
  • You could create a code snippet to efficiently add this behavior in line, as needed.
  • It aligns with a more functional style of programming.
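
To illustrate the first point, the check against taskPtr can stay on the main queue (where it is written) while only the heavy work hops to a global background queue. A minimal sketch, reusing the task and taskPtr variables from the snippet above and assuming the task itself does no UI work:

// Sketch: same debounce check on the main queue, but the resource intensive
// work is pushed to a global background queue once we know it should run.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    if (task == taskPtr) {
        taskPtr = nil;            // remove persistent reference before executing
        [self onStartMethod];     // UI feedback stays on the main thread
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            task();               // heavy work off the main thread
        });
    }
});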

Some downsides to using this approach:

  • There is a little more overhead because you retain all requests for 2 seconds; however, we feel this performance difference is negligible.
  • Your code has the potential to become littered with very similar code all over the place.

Remember, we ran into this problem because we wanted to optimize successive related actions. If you ever find yourself in the same boat, you now have a good starting place: try deciding between the two solutions presented here, and perhaps you’ll uncover a better-fitting solution in the process.

Jesse Black

Software Architect at Stable Kernel
