Think And Build https://www.thinkandbuild.it Unreal Engine, iOS and stuff about coding Mon, 19 Feb 2024 22:47:32 +0000 en-GB hourly 1 https://wordpress.org/?v=5.2.21 Unreal Engine: Custom EQS Generators https://www.thinkandbuild.it/unreal-engine-custom-eqs-generators/ https://www.thinkandbuild.it/unreal-engine-custom-eqs-generators/#respond Mon, 03 Jun 2019 11:08:02 +0000 https://www.thinkandbuild.it/?p=1606

EQS in UE4 comes with a good set of query item generators, but there might be cases where you prefer to create generators tailored to your needs.
I decided to write my own generator because I needed a query to find the best position around the querier, but not too close to it. I knew I could just add a distance test to increase the score with distance, but I didn't even want to consider items within a certain range of the querier, so I ended up with a custom generator that produces a hole around the querier location. Here is a comparison between the Simple Grid Generator available in the UE4 EQS system and my generator.

As you can see, the Simple Grid Generator produces a square, while our Simple Grid Offset Generator produces a square with a hole.

LET’S CODE

Instead of subclassing the Simple Grid Generator, I decided to start from a basic subclass of UEnvQueryGenerator_ProjectedPoints (the base type for all the other shape generators). I found it extremely useful to use the Unreal Engine source code as a starting point: I mainly followed the original code of the Simple Grid Generator, adding the logic to create the hole in the grid.

Let’s go through each needed step to create this class.

GENERATOR PARAMETERS

Here are the parameters needed to define the generator:
OffsetSpace: the space that defines the size of the hole.
GridHalfSize: the size of the grid.
SpaceBetween: the distance between each item.

The data type for these values is FAIDataProviderFloatValue. This is a special data type that inherits from FAIDataProviderValue, wrapping a value (int, float or bool) in a struct and adding logic for data binding.
In a few words, you will be able to edit these parameters from another resource and update the generator items at run time. More info on this topic at this link,
and a blueprint example here.

In case you are not interested in this option you can use a simple float.
Here is the code of the header that defines these parameters.

UCLASS()
class UEnvQueryGenerator_GridOffset : public UEnvQueryGenerator_ProjectedPoints
{
	GENERATED_BODY()

	UPROPERTY(EditDefaultsOnly, Category = "Grid Parameters")
	FAIDataProviderFloatValue OffsetSpace;

	UPROPERTY(EditDefaultsOnly, Category = "Grid Parameters")
	FAIDataProviderFloatValue GridHalfSize;

	UPROPERTY(EditDefaultsOnly, Category = "Grid Parameters")
	FAIDataProviderFloatValue SpaceBetween;
};

Directly from the UE4 documentation:
“…a Generator such as the Simple Grid Generator can use a Context that returns multiple locations or Actors. This will create a Simple Grid, of the defined size and density, at the location of each Context.”

Obviously we don't want to lose the ability to define custom contexts that are not just the querier, so let's add a new parameter of type TSubclassOf<UEnvQueryContext>. It will come in handy soon, when we generate the items.


UPROPERTY(EditDefaultsOnly, Category = Generator)
TSubclassOf<UEnvQueryContext> GenerateAround;

GENERATING ITEMS

The main function responsible for creating the items of our generator is GenerateItems, defined in the UEnvQueryGenerator class.
We will override it, adding our custom code.

The first thing to do here is to bind the generator parameters to the query instance, so that data binding can be used on this generator:

UObject* BindOwner = QueryInstance.Owner.Get();
GridHalfSize.BindData(BindOwner, QueryInstance.QueryID);
SpaceBetween.BindData(BindOwner, QueryInstance.QueryID);
OffsetSpace.BindData(BindOwner, QueryInstance.QueryID);

Then we can grab the current values of these parameters (be sure to use the GetValue function instead of trying to access the value directly):

float RadiusValue = GridHalfSize.GetValue();
float DensityValue = SpaceBetween.GetValue();
float OffsetValue = OffsetSpace.GetValue();

With the next piece of code we will finally create the query items, following these three steps:
1 – calculate the total number of items (taking into account the possibility of multiple contexts)
2 – calculate each item position
3 – project all points, remove those outside the current navmesh and store the result.

The code is quite trivial, here you can find these three steps explained with comments:

// Get number of items per row and calculate the indexes ranges for the hole
const int32 ItemsCount = FPlatformMath::TruncToInt((RadiusValue * 2.0 / DensityValue) + 1);
const int32 ItemsCountHalf = ItemsCount / 2;
const int32 LeftRangeIndex = ItemsCountHalf - FPlatformMath::TruncToInt(OffsetValue / DensityValue) - 1;
const int32 RightRangeIndex = ItemsCountHalf + FPlatformMath::TruncToInt(OffsetValue / DensityValue) + 1;
const int32 OffsetItemsCount = FPlatformMath::TruncToInt((OffsetValue * 2.0 / DensityValue) + 1);

// Get locations for each context (we might have more than one context)
TArray<FVector> ContextLocations;
QueryInstance.PrepareContext(GenerateAround, ContextLocations);

// Reserve the needed memory space of items for each context.
// The total items count is calculated by subtracting the items located inside the hole from the full grid.
TArray<FNavLocation> GridPoints;
GridPoints.Reserve(((ItemsCount * ItemsCount) - (OffsetItemsCount * OffsetItemsCount)) * ContextLocations.Num());
// Calculate position of each item
for (int32 ContextIndex = 0; ContextIndex < ContextLocations.Num(); ContextIndex++) {
	for (int32 IndexX = 0; IndexX < ItemsCount; ++IndexX)
	{
		for (int32 IndexY = 0; IndexY < ItemsCount; ++IndexY)
		{
			// if the item is inside the hole ranges, just skip it.
			if ((IndexY > LeftRangeIndex && IndexY < RightRangeIndex) && (IndexX > LeftRangeIndex && IndexX < RightRangeIndex)) {
				continue;
			}
			// starting from the context location, define the location of the current item 
			// and add it to the gridPoints array.
			else {
				const FNavLocation TestPoint = FNavLocation(ContextLocations[ContextIndex] - FVector(DensityValue * (IndexX - ItemsCountHalf), DensityValue * (IndexY - ItemsCountHalf), 0));
				GridPoints.Add(TestPoint);
			}
		}
	}
}
// Project all the points, remove those outside the current navmesh and store the result.
ProjectAndFilterNavPoints(GridPoints, QueryInstance);
StoreNavPoints(GridPoints, QueryInstance);
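
As a sanity check on the index math, here is the same grid-with-hole logic condensed into plain standard C++. This is a sketch for illustration only: the Unreal types, contexts, projection and navmesh filtering are stripped out, and Point and GenerateGridWithHole are made-up names.

```cpp
#include <cassert>
#include <vector>

struct Point { float X = 0.f, Y = 0.f; };

// Plain-C++ sketch of the grid-with-hole generation above.
std::vector<Point> GenerateGridWithHole(Point Center, float RadiusValue,
                                        float DensityValue, float OffsetValue)
{
    // Number of items per row and the index range of the hole, as in the generator.
    const int ItemsCount      = static_cast<int>(RadiusValue * 2.0f / DensityValue + 1.0f);
    const int ItemsCountHalf  = ItemsCount / 2;
    const int LeftRangeIndex  = ItemsCountHalf - static_cast<int>(OffsetValue / DensityValue) - 1;
    const int RightRangeIndex = ItemsCountHalf + static_cast<int>(OffsetValue / DensityValue) + 1;

    std::vector<Point> GridPoints;
    for (int IndexX = 0; IndexX < ItemsCount; ++IndexX)
    {
        for (int IndexY = 0; IndexY < ItemsCount; ++IndexY)
        {
            // Skip items that fall inside the hole ranges on both axes.
            if (IndexX > LeftRangeIndex && IndexX < RightRangeIndex &&
                IndexY > LeftRangeIndex && IndexY < RightRangeIndex)
            {
                continue;
            }
            GridPoints.push_back({ Center.X - DensityValue * (IndexX - ItemsCountHalf),
                                   Center.Y - DensityValue * (IndexY - ItemsCountHalf) });
        }
    }
    return GridPoints;
}
```

With GridHalfSize = 500, SpaceBetween = 100 and OffsetSpace = 100 this produces an 11×11 grid with a 3×3 hole around the center, so 112 items in total; with OffsetSpace = 0 only the single center item is skipped.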

GENERATOR TEXTUAL DESCRIPTION

The final touch is given by the GetDescriptionTitle and GetDescriptionDetails functions. They just add a textual description directly visible in the EQS editor. The description and title change depending on the values selected by the developer in the editor.

I'm taking these functions as-is from the Simple Grid Generator, adding the offset information.

FText UEnvQueryGenerator_GridOffset::GetDescriptionTitle() const
{
	return FText::Format(LOCTEXT("GridOffsetDescriptionGenerateAroundContext", "{0}: generate around {1}"),
		Super::GetDescriptionTitle(), UEnvQueryTypes::DescribeContext(GenerateAround));
};

FText UEnvQueryGenerator_GridOffset::GetDescriptionDetails() const
{
	FText Desc = FText::Format(LOCTEXT("GridOffseDescription", "radius: {0}, space between: {1}, offset:{2}"),
		FText::FromString(GridHalfSize.ToString()), FText::FromString(SpaceBetween.ToString()), FText::FromString(OffsetSpace.ToString()));

	FText ProjDesc = ProjectionData.ToText(FEnvTraceData::Brief);
	if (!ProjDesc.IsEmpty())
	{
		FFormatNamedArguments ProjArgs;
		ProjArgs.Add(TEXT("Description"), Desc);
		ProjArgs.Add(TEXT("ProjectionDescription"), ProjDesc);
		Desc = FText::Format(LOCTEXT("GridOffsetDescriptionWithProjection", "{Description}, {ProjectionDescription}"), ProjArgs);
	}

	return Desc;
}

USING THE GENERATOR

If you open the EQS editor you will see that our new generator is available in the generators list, and you can use it exactly like the other official generators.




You can find the full code for this tutorial on GitHub.
Feel free to poke me on Twitter or write a comment here.
Ciao!

Unreal Engine: Environment Query System (EQS) in C++ https://www.thinkandbuild.it/environment-query-system-in-c/ https://www.thinkandbuild.it/environment-query-system-in-c/#respond Tue, 21 May 2019 16:43:53 +0000 https://www.thinkandbuild.it/?p=1574

I'm still working on the AI system for The Mirror's End and I decided to move the entire AI core from Behavior Trees to Utility AI. The Environment Query System (EQS) is very well integrated with Behavior Trees and I really didn't want to lose the ability to run EQS from my custom AI system. Luckily, Unreal Engine lets us use EQS outside Behavior Trees in a very easy way. With this quick tutorial I'd like to show you how to do it in C++.

LET’S CODE

The data types we are interested in are UEnvQuery and FEnvQueryRequest.
The UEnvQuery class inherits from UDataAsset which, as the UE4 documentation states, is the "base class for a simple asset containing data".
The easiest way to set up this data is to keep a reference to a UEnvQuery in your Controller.

class AMyController : public AAIController{
.
.
.
    UPROPERTY(EditAnywhere, Category = "AI")
    UEnvQuery *FindHidingSpotEQS;
.
.
}

And fill it from the editor with an EQS asset previously created:

The FEnvQueryRequest struct works as a wrapper that allows query execution and also passes information (query parameters) to the query.

You will call it directly from your class implementation (I actually prefer to keep a reference to the query request in my Controller, but it is not needed).

RUNNING THE QUERY

From your Controller you can simply run the query through the FEnvQueryRequest this way:

FEnvQueryRequest HidingSpotQueryRequest = FEnvQueryRequest(FindHidingSpotEQS, GetPawn());

HidingSpotQueryRequest.Execute(
        EEnvQueryRunMode::SingleResult, 
        this,    
        &AMyController::HandleQueryResult);

First we initialize the query request, passing the query we want to run (the reference to the EQS asset previously defined in the controller header) and the request owner, which in this case is the controlled pawn.
Once the request is initialized we are ready to execute it with the Execute function.
The first parameter defines how to calculate the result of the query. We obviously have the same options available from the Behavior Tree (SingleResult, RandomBest5Pct, RandomBest25Pct, AllMatching). The second parameter is the delegate object that will receive the query results, and the third is the delegate method used to handle the result; this method has to match the FQueryFinishedSignature signature.

HANDLING QUERY RESULT

The delegate function that we will designate to handle the execution of the query might be something like:

void HandleQueryResult(TSharedPtr<FEnvQueryResult> result){
    if (result->IsSuccsessful()) {
        MoveToLocation(result->GetItemAsLocation(0));
    }
}

The query result is wrapped in an FEnvQueryResult struct handled by a shared pointer (in a few words, a smart pointer that owns the object it points to and deletes it when no other references to the object are left).
The FEnvQueryResult struct has some handy functions to check the query state. I'm using IsSuccsessful() (yes, the engine really spells it that way) but you could also use IsFinished() and IsAborted() to check other query states. Once you know that the query completed successfully, you can access the query results using a function like GetItemAsLocation(index), which returns a single item from the query (in the example I'm asking for the item at index 0), or you can call the GetAllAsLocations() function. If you prefer, and if it makes sense for the query you were running, you can also get the items as Actors using GetItemAsActor and GetAllAsActors. Another interesting and useful option is to retrieve all the item scores using the GetItemScore function:

for (int i = 0; i < result->Items.Num(); i++) {
    UE_LOG(LogTemp, Warning, TEXT("Score for item %d is %f"), i, result->GetItemScore(i));
}

Depending on the query run mode you defined (SingleResult, RandomBest5Pct, RandomBest25Pct, AllMatching) you might get different scores, but the items will always be ordered from highest to lowest score.

Here you can find the complete code example; you can also find a gist on GitHub.

Feel free to poke me on Twitter!


// AMyAIController.h ---------------------------------------------
#include "CoreMinimal.h"
#include "AIController.h"
#include "EnvironmentQuery/EnvQueryTypes.h"
#include "MyAIController.generated.h"

class UEnvQuery;

UCLASS()
class EQSTUTORIAL_API AMyAIController : public AAIController
{
	GENERATED_BODY()

	UPROPERTY(EditAnywhere, Category = "AI")
	UEnvQuery *FindHidingSpotEQS;

	UFUNCTION(BlueprintCallable)
	void FindHidingSpot();

	void MoveToQueryResult(TSharedPtr<FEnvQueryResult> result);
};

// AMyAIController.cpp ---------------------------------------------
#include "MyAIController.h"
#include "EnvironmentQuery/EnvQueryManager.h"

void AMyAIController::FindHidingSpot()
{
	FEnvQueryRequest HidingSpotQueryRequest = FEnvQueryRequest(FindHidingSpotEQS, GetPawn());
	HidingSpotQueryRequest.Execute(EEnvQueryRunMode::SingleResult, this, &AMyAIController::MoveToQueryResult);
}

void AMyAIController::MoveToQueryResult(TSharedPtr<FEnvQueryResult> result)
{
	if (result->IsSuccsessful()) {
		MoveToLocation(result->GetItemAsLocation(0));
	}
}

Ciao!

UE4 AI Perception System – with just a little bit of C++ https://www.thinkandbuild.it/ue4-ai-perception-system/ https://www.thinkandbuild.it/ue4-ai-perception-system/#respond Sat, 11 May 2019 07:11:39 +0000 https://www.thinkandbuild.it/?p=1515 In this article I'll go down the rabbit hole, showing how to set up and use the AI Perception system. The official documentation about this topic is good, but I had to scrape other needed information from various forum threads, Unreal-Answers posts and a lot of trial and error. This article condenses my experience and findings into a single place 🙂

QUICK INTRO ABOUT PERCEPTION AI

If this is the first time you've heard about the AI Perception System, you might be pleased to know that UE4 has a set of ready-to-use functionalities that let you easily extend your AI with senses like sight and hearing.
You essentially have to deal with 2 components:

AIPerception Component: defines the receiver of the perception. You normally attach it to an AIController that registers for one or more senses. This is the actor that should "see" or "hear" someone else.

AIPerceptionStimuliSource Component: a component that you can attach to a Character (I've actually seen it working from Controllers too, but I can't find anyone pushing in this direction) to generate a trigger for the AIPerception component. This is the actor that will be "seen" or "heard" by someone else.

For the sake of simplicity (and, truth be told, because I haven't used the other ones yet) I'm only talking about sight and hearing, but there are other stimuli that you can listen for and trigger with the AI Perception system, like Damage, Team, Touch and Prediction, and you can even create your own custom sense.

THE LISTENER SETUP

Let’s create a simple AI that will implement the sight perception and will be able to spot enemies.
We can create a new controller that inherits from AIController (later we will create a custom C++ AIController class just to include some features not available via Blueprints). Let’s call it SightAIController.

In the SightAIController just attach the AIPerception component.

Select the newly added component and, from the Details panel, under AI Perception, add a new sense config, selecting AI Sight config, and use these parameters:
Sight Radius: 1500
Lose Sight Radius: 2000
PeripheralVisionHalfAngleDegrees: 70
Now check the 3 checkboxes under Detect by Affiliation.

We have just created an AI that watches within a radius of 1500 units (15 m) and an angle of 140 degrees (70 left, 70 right) to spot enemy, neutral or friendly actors. The lose sight radius is still a bit of a mystery to me, but in general, if the AI has seen an enemy, this is how far the enemy must move away before the AI loses the target.
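
The two radii together form a simple hysteresis: a target is acquired when it comes inside the sight radius, but only dropped once it moves beyond the (larger) lose sight radius, which prevents flickering at the boundary. Here is a minimal plain-C++ sketch of the idea; it is an illustration only, not engine code, it ignores the vision cone, and SightSensor is a made-up name.

```cpp
#include <cassert>

// Plain-C++ sketch of the sight/lose-sight hysteresis described above.
struct SightSensor
{
    float SightRadius     = 1500.f;
    float LoseSightRadius = 2000.f;
    bool  bHasTarget      = false;

    // Returns whether the target is perceived at the given distance this tick.
    bool Update(float DistanceToTarget)
    {
        if (!bHasTarget)
            bHasTarget = (DistanceToTarget <= SightRadius);     // acquire inside sight radius
        else
            bHasTarget = (DistanceToTarget <= LoseSightRadius); // keep until lose-sight radius
        return bHasTarget;
    }
};
```

A target standing at 1800 units is never acquired, but a target seen at 1000 units and then retreating to 1800 units is still tracked, and is only lost past 2000 units.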

At the time I'm writing this article (Unreal 4.22) the Detect by Affiliation option works only partially via Blueprints. As we'll see later, we need a way to differentiate enemy, friendly and neutral actors, and to achieve this we will use C++.

DEBUG LISTENER SETTINGS

To verify that your implementation is correct and that the parameters work for your level, we can leverage the AI debug tools. They are extremely useful and easy to use!
– Pick the mannequin character, or create a new one, and select SightAIController as its AI Controller Class.
– Place it into the level and play the level.
– Press the "apostrophe" or "single quote" key on your keyboard and you will see a bunch of data on your screen.

You can easily filter this data to include only stuff related to the AI Perception System. By pressing 0, 1, 2, 3 or 4 on your numpad you can toggle information:

0 toggles the NavMesh info, 1 is for AI, 2 is for Behaviour Trees, 3 is for EQS and 4 is for Perception. Just disable what you don't need and enable perception by pressing 4.

You will see a green circle identifying the radius, and a violet one that identifies the lose sight radius. You should also see two green lines to highlight the 70 degrees sight angle (140 from left to right).

At this point, if you play the game with the debugger enabled and run with your player into the AI perception area, you should see a green sphere moving together with the player and stopping at the last known position when you leave the sight radius area. It means that the AI has spotted you, yay!
By default Pawns are registered as stimuli sources; that is why the player triggers AI sight even though we haven't added any stimuli source to our character.

If you want to disable this behaviour, just add these lines to your DefaultGame.ini configuration. The next section will show you how to enable the sense trigger only for the desired pawns, through a dedicated stimuli source.

[/Script/AIModule.AISense_Sight]
bAutoRegisterAllPawnsAsSources=false

THE TRIGGER SETUP

Now let’s update the player Controller to trigger the AI sense that we have just implemented.
Attach the AIPerceptionStimuliSource component to the player character, then select the component and, from the Details panel, under AI Perception, add an element to Register as Source for Senses, set it to AISense_Sight and check Auto Register as Source for this character. This is enough to be sure the player will be spotted by another actor that uses the AIPerception component with sight.

FRIEND OR FOE?!

If you remember, I told you that the Detect by Affiliation field is not fully working from Blueprints. In general you have no way to set up the behaviour of one team toward another from Blueprints, but we can easily handle this logic using C++.
The AAIController class implements an interface called IGenericTeamAgentInterface. This interface is responsible for providing information about team membership and the attitude of each team toward the other teams. Here is where we need to write some missing pieces in C++, overriding the default behavior of this interface.

Let's start by creating a custom AIController. Create a new C++ class that inherits from AIController and call it SightAIController.
Now what we want to do is specify the controller's team ID and its attitude toward the other teams.
A team ID is a uint8 wrapped in a struct called FGenericTeamId (by default, the value 255 means "no team"). To define the attitude, we implement the IGenericTeamAgentInterface method GetTeamAttitudeTowards, returning the expected attitude. Let's see how.

In the controller header add a new public override:

#include "CoreMinimal.h"
#include "AIController.h"
#include "GenericTeamAgentInterface.h"
#include "SightAIController.generated.h"

UCLASS()
class TUTORIAL_API ASightAIController : public AAIController
{
  GENERATED_BODY()
  ASightAIController();

public:
  // Override this function 
  ETeamAttitude::Type GetTeamAttitudeTowards(const AActor& Other) const override;
};

And implement this class specifying the team in the constructor and overriding the GetTeamAttitudeTowards function this way:

#include "SightAIController.h"

ASightAIController::ASightAIController()
{
  SetGenericTeamId(FGenericTeamId(5));
}


ETeamAttitude::Type ASightAIController::GetTeamAttitudeTowards(const AActor& Other) const
{

  if (const APawn* OtherPawn = Cast<APawn>(&Other)) {

    if (const IGenericTeamAgentInterface* TeamAgent = Cast<IGenericTeamAgentInterface>(OtherPawn->GetController()))
    {
      return Super::GetTeamAttitudeTowards(*OtherPawn->GetController());
    }
  }

  return ETeamAttitude::Neutral;
}

By default, GetTeamAttitudeTowards compares the team ID of the sensed actor with the team ID specified in this controller; if they are different, the two actors are considered hostile to each other.

You could also implement your own custom logic, directly returning an ETeamAttitude value. A couple of examples: for an AI in berserk mode that attacks any actor it sees, you would just always return "Hostile"; or, to create an alliance between two specific teams, you could check the team IDs and return "Neutral" or "Friendly" depending on them.

You can easily access the team ID of each team agent with GetGenericTeamId():

if (const IGenericTeamAgentInterface* TeamAgent = Cast<IGenericTeamAgentInterface>(OtherPawn->GetController()))
    {
      //Create an alliance with Team with ID 10 and set all the other teams as Hostiles:
      FGenericTeamId OtherTeamID = TeamAgent->GetGenericTeamId();
      if (OtherTeamID == 10) {
        return ETeamAttitude::Neutral;
      }
      else {
        return ETeamAttitude::Hostile;
      }
    }
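
The default rule and the alliance example above can be condensed into a plain-C++ sketch. This is illustrative only; ETeamAttitudeSketch and both helper functions are made-up names, not engine API.

```cpp
#include <cassert>
#include <cstdint>

enum class ETeamAttitudeSketch { Friendly, Neutral, Hostile };

// Default rule as described above: actors on the same team are friendly,
// actors on different teams are hostile to each other.
ETeamAttitudeSketch AttitudeTowards(uint8_t MyTeamId, uint8_t OtherTeamId)
{
    return MyTeamId == OtherTeamId ? ETeamAttitudeSketch::Friendly
                                   : ETeamAttitudeSketch::Hostile;
}

// Custom rule from the alliance example: team 10 is treated as neutral,
// every other team as hostile.
ETeamAttitudeSketch AllianceAttitudeTowards(uint8_t OtherTeamId)
{
    return OtherTeamId == 10 ? ETeamAttitudeSketch::Neutral
                             : ETeamAttitudeSketch::Hostile;
}
```

Swapping AllianceAttitudeTowards in for the default rule is exactly what the GetTeamAttitudeTowards override above lets you do.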

We are all set with the AI controller, but some work is still needed on the Player Controller. Create a new C++ Player Controller class that inherits from the base PlayerController. We now have to implement the IGenericTeamAgentInterface for this controller too.

#include "CoreMinimal.h"
#include "GameFramework/PlayerController.h"
#include "GenericTeamAgentInterface.h"
#include "APlayerControllerTeam.generated.h"


UCLASS()
class TUTORIAL_API APlayerControllerTeam : public APlayerController, public IGenericTeamAgentInterface
{
  GENERATED_BODY()
  
public:
  APlayerControllerTeam();

private: 
  // Implement the IGenericTeamAgentInterface 
  FGenericTeamId TeamId;
  virtual FGenericTeamId GetGenericTeamId() const override;
};

And its implementation is very easy:

#include "APlayerControllerTeam.h" 

APlayerControllerTeam::APlayerControllerTeam()
{
  PrimaryActorTick.bCanEverTick = true;
  TeamId = FGenericTeamId(10);
}

FGenericTeamId APlayerControllerTeam::GetGenericTeamId() const
{
  return TeamId;
}

You should now create your blueprints inheriting from these new classes, and the detect by affiliation will work as expected.

You can check this Gist for the AI Controller code

USING THE AI PERCEPTION SYSTEM

The easiest way via Blueprints is to open the SightAIController blueprint (or a blueprint that inherits from this class, if you have implemented it in C++), select the AIPerception component and use one of the available events of this component. As an example, you could use the OnPerceptionUpdated event to get all the updates of the AI perception. You can then access the perception info through something similar to this BP (please note that we are using Get(0) because we are taking for granted that only one sense, sight, can be triggered in this example).

Another common option is to use the AI Perception system in conjunction with a Behaviour Tree. In this case you could fill a blackboard item with the spotted enemy and update another blackboard item with its position.

I hope this article will be helpful to you. Please feel free to provide your hints through the comment box below and add any other useful info for the other users (I will update the article in case you guys spot anything wrong or anything that deserves more detail).

Ciao!

What’s up with ThinkAndBuild? https://www.thinkandbuild.it/whats-up-with-thinkandbuild/ https://www.thinkandbuild.it/whats-up-with-thinkandbuild/#respond Tue, 05 Mar 2019 21:21:32 +0000 http://www.thinkandbuild.it/?p=1383 Hello guys! How are you all doing? I've recently received emails asking about the blog and its future, so I decided it was time to explain here what I'm working on and why I'm no longer writing on ThinkAndBuild.it.

It’s been a long time since the last article — 2 years! — and to me this has been a clear indication that I’m not interested in maintaining this blog anymore. It’s not that I’m no longer passionate about iOS in general: it is still my day job! In the meantime though a new interest has grown on me since the day I released my first video-game for iOS, Linia.

“Coding your own video game is easier than you think” they said.

With this (oh so wrong) assumption, a huge passion for video-games and a game already on the (virtual) shelves, I decided to raise the bar and aim at the PC and console worlds with a new project called The Mirror’s End. This new endeavor has been literally eating away all my free time and energy, and before I could realize it, my focus had entirely shifted to this new project, leaving ThinkAndBuild behind.

It is sad for me to write this and admit (mainly to myself) that I won’t write about iOS anymore. I still have to decide about the future of ThinkAndBuild though: maybe I could write about my new project… or I could just say goodbye to the blog with this very article.

Either way, I want to thank you all for your support. I’ve truly appreciated all your emails and your messages on social media all these years! It has been a rewarding and interesting adventure and I’m truly satisfied with the results achieved so far.

Last but not least a huge thanks goes to Nicola Armellini, a good friend who helped with his suggestions and the editing of my articles.

Ciao ciao:)
A presto

Yari
ThinkAndBuild

VIPER-S: writing your own architecture and understand its importance (part 3) https://www.thinkandbuild.it/viper-s-writing-your-own-architecture-and-understand-its-importance-part-3/ https://www.thinkandbuild.it/viper-s-writing-your-own-architecture-and-understand-its-importance-part-3/#respond Thu, 29 Jun 2017 15:32:21 +0000 http://www.thinkandbuild.it/?p=1363 In the previous two articles we saw how to set up and implement VIPER-S. In this third and last article of the series we will focus on sharing information between modules and on testing.

Sharing data between modules

Passing information between modules is a crucial task, and our architecture should be able to take care of it in a clear way.
The first flow we’ll discuss is required when a module needs to send information to the following module, like when we select an item from the ItemsList. In this case we want to show the ItemsDetails module for the selected item.

The solution that I’ve implemented for VIPER-S resides in the Navigator of the ItemsDetails module. The ItemsDetails module doesn’t mean anything without an Item: it needs an Item reference to work correctly. That being said, it’s a good idea to build the module by passing the Item directly to its makeModule() function, and storing the Item reference in the module director so that it’s available for all the other actors. Here is the function code:


  
static func makeModule(with item:Item)-> ItemsDetailsScene {

    // Create the actors


    director.ui = scene
    director.item = item // HERE the item is stored within the director
    scene.eventsHandler = director

    return scene
}

and we can call it from the ItemsList module director, when the user selects an item.

 

    func onItemSelected(index:Int) {
        let item = items[index]
        navigator.gotoDetail(for: item)
    }

Another really common case where we need to pass information from one module to another, is when a module has to be dismissed after performing an operation that might alter the information of the presenting (previous) module. In that case the previous module has to be updated before being shown again. In our example app we encounter this flow when a new item is created by the ItemsAdd module. After the item has been created we want to update the ItemsList.

This is the flow:

We can use delegates or notifications/observers, as usual. In this case I've implemented my solution using delegates. The delegate object implements the "ItemsAdd_Delegate" protocol, which requires the "itemDidCreate" function to be implemented. Since the flow is quite common, I think that, again, it's a good idea to build the ItemsAdd module by passing the delegate directly to the makeModule function and storing it in the director.

    
static func makeModule(with delegate:ItemsAdd_Delegate?)-> ItemsAddScene {
        
        // Create the actors
        
        let scene = instantiateController(id: "Add", storyboard: "Items", bundle:Bundle(for: self)) as! ItemsAddScene
        let navigator = ItemsAddNavigator(with: scene)
        let worker = ItemsAddWorker()
        let director = ItemsAddDirector()
        
        // Associate actors
        
        director.dataManager = worker
        director.navigator = navigator
        director.ui = scene
        director.delegate = delegate // PASS the delegate to the director
        worker.presenter = director
        scene.eventsHandler = director
        
        return scene
}

So you can easily present the ItemsAdd module from the ItemsList module like this:

 

    // ItemsAddDirector code 
    func onAddItem() {
        navigator.gotoAddItem(with: self)
    }

    // ItemsAddNavigator code 
    func gotoAddItem(with delegate:ItemsAdd_Delegate?) {
        let itemAdd = ItemsAddNavigator.makeModule(with: delegate)
        push(nextViewController: itemAdd)
    }

When the ItemsAdd module completes its operation, it calls the itemDidCreate function on the delegate, which at this point can easily perform any needed update on the view.

extension ItemsAddDirector: ItemsAdd_PresentData{
    
    func presentSuccess(){
        delegate?.itemDidCreate()
        navigator.goBack()
    }
}

While there are certainly a number of cases we haven’t covered, I’m confident that we can easily handle them by using the solutions we just looked at as a reference.

Testing the architecture

It's now time to talk about testing, a topic really close to our hearts!
Testing the code that we have written until now is really easy, although I must admit I still have doubts about how to test some actors… Let's start by checking the tests that can be very easily integrated into our code.

The architecture is driven by protocols, so it's pretty easy to write spy objects that we can inject into the class we're testing to keep track of its behaviour.

Let's start by testing the director of the ItemsList module. This class implements 2 protocols, "ItemsList_HandleUIEvents" and "ItemsList_PresentData", which in turn are strictly related to the ui and dataManager objects of the director class. Let's check the code of the "onUIReady" function implemented in the director class, to clearly understand what I mean here:

    func onUIReady() {
        ui.toggle(loading: true)
        dataManager.fetchItems()
    }

The UI and dataManager objects are just an implementation of 2 other protocols: “ItemsList_DisplayUI” and “ItemsList_ManageData”.

WRITING SPIES

A spy is a “fake” implementation of a protocol that keeps track of all the information that passes through it, and it’s the technique I’m using to test all the VIPER-S modules. Depending on how deeply you want to test your classes, you can write spies that track more or less information during test execution. Since the code written so far is quite simple, we’ll cover all the function calls with all the received params.

The dataManager object of the director implements “ItemsList_ManageData”. Here is the code for its spy:

    class DataManagerSpy: ItemsList_ManageData{
        
        var fetched: Bool = false
        var deleted: Bool = false

        func fetchItems(){ fetched = true }
        func deleteAllItems(){ deleted = true }
    }

Easy. It just uses two booleans to record whether each function has been called.

Let’s check the UI spy now, an implementation of “ItemsList_DisplayUI” protocol.

    class UISpy: ItemsList_DisplayUI{
        
        var isLoading:Bool = false
        var presentedItems:[ItemUI] = []
        var errorIsPresented:Bool = false
        var successIsPresented:Bool = false
        
        func toggle(loading:Bool){ isLoading = loading }
        func display(items: [ItemUI]){ presentedItems = items }
        func displayError(){ errorIsPresented = true }
        func displaySuccess(){ successIsPresented = true }
    }

The “toggle(loading:)” function stores the state of the loader in the “isLoading” property, while “presentedItems” keeps track of the items received by the “display(items:)” function. The “displayError” and “displaySuccess” calls are simply tracked by two booleans.

Another spy that we want to inject is dedicated to the navigator property, just to be sure that the director is implementing the expected app flow.

    class NavigatorSpy: ItemsList_Navigate{
        
        var selectedItem: Item? = nil
        var goingToAddItem: Bool = false
        var goingBack: Bool = false
        
        func gotoDetail(`for` item:Item){ selectedItem = item }
        func gotoAddItem(with delegate:ItemsAdd_Delegate?) 
             { goingToAddItem = true }
        func goBack(){ goingBack = true }
    }

Here we keep a reference to the item that will be passed to the “gotoDetail” function with the “selectedItem” property and we use booleans to verify that the other functions have been called.

The code for these classes lives inside the director test class (take a look at the “ItemsListDirectorTest.swift” code), so we can reuse the same names for the spies of the other actors, resulting in much cleaner, more readable and predictable code.

INJECTING THE SPIES

The right place to inject and reset the spies is inside the test “setUp” function. This function will be called before each test is executed, resetting the state for the spies. We should also keep references to the spies objects to be able to access them from our tests. Here is the code used to perform the setup and injection of the spies:

class ItemsListDirectorTests: XCTestCase {
    // ... the spy classes shown above are implemented here ...
   
    // MARK: Setup tests 

    var director: ItemsListDirector!
    var navigator: NavigatorSpy!
    var dataManager: DataManagerSpy!
    var ui: UISpy!
    
    override func setUp() {
        super.setUp()
        
        director = ItemsListDirector()
        ui = UISpy()
        navigator = NavigatorSpy()
        dataManager = DataManagerSpy()
        
        director.ui = ui
        director.navigator = navigator
        director.dataManager = dataManager
    }

As you can see, we are writing something similar to the code inside the makeModule function of the navigator, injecting our versions of the UI, data manager and navigator. Now we can easily test all the director functions.

WRITING CODE FOR TESTS

The onUIReady function is really easy to test at this point. Let’s check its code again:

    func onUIReady() {
        ui.toggle(loading: true)
        dataManager.fetchItems()
    }

We know that it has to run the loader on the UI and fetch items on the dataManager. Let’s test this flow by using our spies. We need to verify the property “isLoading” to be true for the “UISpy” instance, and the property “fetched” to be true for the “DataManagerSpy” instance. Nothing else.

    func test_onUIReady(){
        
        // When
        director.onUIReady()
        
        // Then 
        XCTAssertTrue(ui.isLoading)
        XCTAssertTrue(dataManager.fetched)
    }

We can easily obtain a semantic separation of the code by using simple comments that make the code even more readable:
//Given: to define some specific conditions for the test (not needed for the onUIReady function)
//When: to define the main action that we are testing
//Then: where we write the expectation for the test

We can read the code like a sentence: GIVEN these conditions, WHEN we execute this action, THEN we expect these results.
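
The same Given/When/Then shape applies to the other director events. For instance, here is a sketch of a test for the onDeleteAll event, using the spies defined above (as with onUIReady, no Given block is needed):

```swift
func test_onDeleteAll() {
    
    // When
    director.onDeleteAll()
    
    // Then: the director must forward the event to the data manager
    XCTAssertTrue(dataManager.deleted)
}
```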

Let’s see a more interesting test to prove that director is able to present items. In this case we want to test the “presentItems” function. This is the original code written in the director class for that function:

    func present(items:[Item]){
        self.items = items
        ui.display(items: itemsUI)
        ui.toggle(loading: false)   
    }

The list of items is stored in the director, the items are presented, and the loader is stopped.

Let’s now check how we can test this behaviour:

    func test_presentItems(){
        
        // Given
        let item_one = Item(name: "one", enabled: false, date: Date())
        let item_two = Item(name: "two", enabled: false, date: Date())
        ui.isLoading = true

        // When
        director.present(items: [item_one, item_two])
        
        // Then
        XCTAssertFalse(ui.isLoading)
        XCTAssertTrue(director.items.count == 2)
        XCTAssertTrue(ui.presentedItems.count == 2)
    }

The conditions here are that we are presenting "item_one" and "item_two" and that the view is currently loading. So we trigger the presentation in the "when" block, and we expect the UI to no longer be loading, the total number of items stored in the director to be 2, and the presented items in the UI (the spy) to be 2 as well. We could also write a more specific test to check that the items presented are exactly the same as those we have received.

I'd like to stress that we are using injected code to verify the original flow of the director here. Here's a portion of the code for the UISpy we previously presented:

    class UISpy: ItemsList_DisplayUI{
        // ...
        var presentedItems:[ItemUI] = []
        // ...
        func display(items: [ItemUI]){ presentedItems = items }
        // ...
    }

The director calls the injected “display(items:)” on the injected UI, whose only role is to store the items in a local variable. What we are testing here is that the items passed to “director.present(items:)” reach the UI (the fake one used in the test).
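
A stricter version of the test could also verify that the presented items match the received ones. Here is a sketch comparing just the names — since ItemUI keeps the name unchanged through the conversion, this is a reasonable proxy:

```swift
func test_presentItems_keepsOrderAndContent() {
    
    // Given
    let item_one = Item(name: "one", enabled: false, date: Date())
    let item_two = Item(name: "two", enabled: false, date: Date())
    
    // When
    director.present(items: [item_one, item_two])
    
    // Then: the names survive the Item -> ItemUI conversion untouched
    XCTAssertEqual(ui.presentedItems.map { $0.name }, ["one", "two"])
}
```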

TESTING THE WORKER

Similarly to what we did for the director, we can inject spies into the worker. This time we need to inject a presenter (ItemsList_PresentData), but we should also find a way to simulate calls to the “NetworkManager”.

When we need to interact with objects that are out of the scope of the test (or that are already tested in a different context), it is common practice to substitute the object with a dummy of the original. In our case the NetworkManager is an opaque object and we don’t need to test it. We can substitute the file imported in the test target with a completely different file that has the same class name, functions and properties as the “original” one. Alternatively, we can stub the network calls using libraries like “OHHTTPStubs”. Considering that our network manager doesn’t perform any real network call (and that it’s really simple), it’s fine to follow the first method and create a fake version included only in the test target. With this implementation we add some handy variables that define how the network manager replies to a call: the storedItems property stores the items that we want to return from the getItems call, and the boolean “nextCallWillFail” determines whether we are simulating a call that fails.

    static var storedItems:[Item] = []
    static var nextCallWillFail: Bool = false
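
The excerpt only shows the two control variables. Here is a minimal sketch of what the full test-target stand-in could look like — the getItems signature is an assumption, and the Item struct is repeated from the contract so the sketch is self-contained:

```swift
import Foundation

// Item struct repeated from the contract (see part 2).
struct Item {
    let name: String
    var enabled: Bool
    let date: Date
}

// Test-target-only stand-in: same class name as the "original" NetworkManager.
class NetworkManager {
    
    static var storedItems: [Item] = []
    static var nextCallWillFail: Bool = false
    
    // Hypothetical API: replies synchronously using the injected state.
    static func getItems(completion: (Bool, [Item]) -> Void) {
        if nextCallWillFail {
            completion(false, [])
        } else {
            completion(true, storedItems)
        }
    }
}
```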

We have already introduced the spy logic, so I’m just going to show you the entire code that implements and injects the presenter spy for the worker:

class ItemsListWorkerTests: XCTestCase {
    
    // MARK: - Spies
    
    class PresenterSpy:ItemsList_PresentData {
        
        var presentedItems:[Item] = []
        var isSuccessPresented = false
        var isErrorPresented = false
        var expectation:XCTestExpectation? = nil
        
        func present(items:[Item]){ 
                presentedItems = items; expectation?.fulfill(); }
        func presentSuccess(){ isSuccessPresented = true }
        func presentError(){ isErrorPresented = true }
    }
    
    // MARK: - Test setup
    
    var worker: ItemsListWorker!
    var presenter: PresenterSpy!
    
    override func setUp() {
        super.setUp()
        
        worker = ItemsListWorker()
        presenter = PresenterSpy()
        worker.presenter = presenter
    }

With this code in mind we can implement tests for the fetchItems function. We want to test both the succeeding and the failing call, so let’s start with the successful one:

    
    func test_fetchItemsCompleted(){
        
        // Given 
        let item_one = Item(name: "one", enabled: true, date: Date() )
        let item_two = Item(name: "two", enabled: false, date: Date())
        
        NetworkManager.storedItems = [item_one, item_two]
        NetworkManager.nextCallWillFail = false
        
        let expect = expectation(description: "fetch")
        presenter.expectation = expect
        
        // When
        worker.fetchItems()
        
        // Then
        wait(for: [expect], timeout: 1)
        XCTAssertTrue(presenter.presentedItems.count == 2)
        
    }
    

The given conditions are for our version of “NetworkManager” to return 2 items and for the network call not to fail. When the worker calls the fetchItems function we expect to see the 2 items that we have previously injected passed to the presenter. An expectation puts the test function on hold and it will be fulfilled with the injected version of “present(items:)” in the presenter spy.

Similarly, we can test the network call failure by setting nextCallWillFail to true.

In this case we do not expect elements to be passed to the presenter and we’ll just check if the error has been presented.

    func test_fetchItemsFailed(){
        
        // Given
        let item_one = Item(name: "one", enabled: true, date: Date() )
        let item_two = Item(name: "two", enabled: false, date: Date())
        
        NetworkManager.storedItems = [item_one, item_two]
        NetworkManager.nextCallWillFail = true
        
        // When
        worker.fetchItems()
        
        // Then
        XCTAssertTrue(presenter.presentedItems.count == 0)
        XCTAssertTrue(presenter.isErrorPresented)
    }
    

TESTING THE NAVIGATOR

At the moment the only thing that we test for the navigator is the “makeModule” function. We want to be sure that the architecture is respected and that all the actors have been created and assigned to the right objects.

Here is the code for the navigator test suite:

class ItemsListNavigatorTests: XCTestCase {

    func test_makeModule(){
    
        // Given 
        let module = ItemsListNavigator.makeModule()
        
        // Then
        guard let director = module.eventsHandler as? ItemsListDirector else{
            XCTFail("No director defined")
            return
        }
        
        guard let worker = director.dataManager as? ItemsListWorker else{
            XCTFail("No worker defined")
            return
        }
        
        guard let _ = director.navigator as? ItemsListNavigator else{
            XCTFail("No navigator defined")
            return
        }
        
        guard let _ = director.ui as? ItemsListScene else{
            XCTFail("no scene defined")
            return
        }
        
        guard let _ = worker.presenter as? ItemsListDirector else{
            XCTFail("no presenter defined")
            return
        }   
    }
}

Nothing crazy, right? We just ensure that the makeModule function generates the expected architecture with the right actors in place.

You can find all the tests for the other actors on the github project. Please note that I’m not testing the scene at all… feel free to suggest how you would test it!

Conclusions

VIPER-S is far from perfect: it has pros (very well organized and readable code) and cons (a lot of boilerplate code and too many files), but above all it was a great learning experience and I hope you have appreciated it too, if only from a purely theoretical perspective. As I told you at the beginning, there are still uncertainties and critical parts that need to be addressed, so it would be super cool if you wanted to share your point of view with me (on Twitter)!

Last but not least, high-fives to Nicola, who again took time to review the article! [NA: Back at you. Take care y’all!]

VIPER-S: writing your own architecture to understand its importance (part 2) — Mon, 19 Jun 2017

In the previous article we introduced VIPER-S with an overview of its Domains and Roles, we organized our modules with folders, and we started writing the contract for the “ItemsList” module. With this new article we’ll complete the architecture by implementing the Actors. An Actor is the entity responsible for the implementation of all the Roles of a specific Domain which, as you may remember from the previous article, are defined by the protocols in the Contract file. That being said, we are going to implement each Actor using those protocols and connect the Actors to each other in order to achieve the architecture’s flow. Here’s the flow overview again for a quick review.

ACTORS: THE WORKER

In the previous article we introduced the term “Worker”. This name is a good choice because it identifies the class responsible for handling the Data domain and its roles.

First we define the file and class names. As per our convention, we use the folder structure for the prefix (Items->List) and we complete the name with the word “Worker”, which results in “ItemsListWorker”. The protocol that we are going to implement for this actor is “ItemsList_ManageData”.
At some point the worker will have to communicate with another object which is able to present information. In our architecture this object implements the “ItemsList_PresentData” protocol. Here is an image to better describe the Worker structure.

With this structure in mind let’s write the code for the Worker:

class ItemsListWorker { 
    var presenter: ItemsList_PresentData!
}

extension ItemsListWorker: ItemsList_ManageData { 

    func fetchItems(){

       // get items 
       
       // ... code to obtain the items here ...
        
       if operationCompleted  {
           presenter.present(items: items) 
       } else {
       // or in case of errors 
           presenter.presentError()
       }
    }

    func deleteAllItems(){ 
       
       // delete items 
       
       // ... code to delete the items here ...

       if operationCompleted  {
           presenter.presentSuccess()
        } else {
           presenter.presentError()
        }
    }
}

I generally prefer to split logic using extensions, so I’ll keep this rule for the rest of the architecture. If the code in a file grows beyond 200 lines, I usually move some extensions to a new file with a specific name. Feel free to keep all the code in the same block if you prefer.

The code for this class is straightforward. Essentially we get/set data and, depending on the result of the operation, another class is called to present the result.

Now let’s focus on the “presenter.present(items: items)” call. If you check the prototype for the “ItemsList_PresentData” protocol you’ll notice the Item type. It’s a structure that we use as a model for the information retrieved from databases/network/whatever. It’s really useful to define a model for communication between domains because we improve our contract with more information that better defines the communication between actors.

Let’s extend the contract file with the Item structure definition: an item has a name, a creation date, and can be enabled or disabled.

struct Item{ 
     let name: String
     var enabled:Bool 
     let date: Date 
}

ACTORS: THE SCENE

This actor is essentially the user interface and it covers the UI domain. The name for this class, following our naming convention, is “ItemsListScene”.
As we have already seen for the UI domain, the Scene actor has two main roles: drawing/updating the UI elements and redirecting UI events to an event handler. In order to cover these roles it has to implement the DisplayingUI protocol and it needs to communicate with another object that implements the UIEventsHandler protocol.

Since at some point we have to communicate with the iOS navigation flow, we need to be in contact with the underneath UIKit layer and with the default system UI events. This actor is therefore a subclass of UIViewController in order to create this communication channel. If you are used to working with MVC, you’ll probably find the limited responsibility set that we are assigning to the view controller to be really unusual.

Let’s see how all this logic ends up working in our code:

class ItemsListScene: UIViewController {
    @IBOutlet var table: UITableView!
    var items: [ItemUI] = [] 
    var eventsHandler: ItemsList_UIEventsHandler!
}

// MARK: - Display UI
extension ItemsListScene: ItemsList_DisplayUI{ 

    func display(items: [ItemUI]) { 
         self.items = items 
         table.reloadData()
    }

    func display(error: String){ 
         showPopup(title: "Error", message: error)
    }

    func displaySuccess(){ 
         showPopup(title: "Success", message: "Operation completed")
    }
}

// MARK: - UI Interaction 
extension ItemsListScene { 

    @IBAction func deleteAll(){ 
         eventsHandler.onDeleteAll()
    }
}

The code is straightforward, so we only need to check what the ItemUI is. As you know by now, with VIPER-S we want to define a clear distribution of responsibilities. That’s why we don’t want to overload the Scene with useless information. Its main role is to display information, so it expects a model that it can display without any further action. The “ItemUI” model is a transformation of the Item model that we’ve seen with the Worker actor. Let’s add the definition of the “ItemUI” model to the contract as we did for the Item model and then compare the two.

struct ItemUI { 
    let name: String 
    let color: UIColor
    let date: String 
}

struct Item { 
     let name: String
     var enabled: Bool 
     let date: Date 
}

As you can see the two structures are slightly different. Specifically, the “name” is the only unchanged piece of data. For starters, the “enabled” property is no longer available: we assign a color to the item’s state instead, because we need to talk in “UI language” here. The UI doesn’t know what “enabled” means for an item: it just needs to know which color to use when drawing. The “date” has been converted from Date to String because that is the most appropriate type for a label. More on this conversion later, with the Director actor.

The last part of the Scene is the code needed for the Table datasource and delegate. If you prefer, you can add this code to a brand new file where you only put the code related to the table. A good name for that would be “ItemsListSceneTable.swift”.

extension ItemsListScene: UITableViewDelegate, UITableViewDataSource {

    func tableView(_ tableView: UITableView, 
                   numberOfRowsInSection section: Int) -> Int {
        return items.count
    }

    func tableView(_ tableView: UITableView, 
                   cellForRowAt indexPath: IndexPath) -> UITableViewCell {

        let cell = tableView.dequeueReusableCell(withIdentifier: "Item")
        let item = items[indexPath.row]

        let nameLabel = cell!.viewWithTag(1) as! UILabel
        let dateLabel = cell!.viewWithTag(2) as! UILabel

        nameLabel.text = item.name
        nameLabel.textColor = item.color
        dateLabel.text = item.date

        return cell!
    }

    func tableView(_ tableView: UITableView, 
      didSelectRowAt indexPath: IndexPath) {

        eventsHandler.onItemSelected(index: indexPath.row)
    }

}

Nothing special happens on the item drawing side: they are presented as cells, setting UI elements directly with the information of the “ItemUI” model.
When a cell is selected the “eventsHandler” is triggered with the “onItemSelected” event.

ACTORS: NAVIGATOR

Here we implement the roles of the Navigation domain. The Navigator actor needs to implement the Navigate protocol and it needs a reference to the module’s ViewController to easily obtain access to the standard UIKit navigation functions.

Let’s check the Navigator class code one step at a time.

class ItemsListNavigator: Navigator { 
    static func makeModule()-> ItemsListScene {

        // Create the actors
        let scene = instantiateController(id: "List", 
                                          storyboard: "Items") as! ItemsListScene
        let navigator = ItemsListNavigator(with: scene)
        let worker = ItemsListWorker()
        let director = ItemsListDirector()

        // Associate actors 
        director.dataManager = worker
        director.navigator = navigator
        director.ui = scene
        worker.presenter = director
        scene.eventsHandler = director

        return scene
    }
}

This class extends a generic Navigator class that is just a layer above some UIKit functions and keeps a reference to the module’s View Controller. I’ve only pasted the interesting portion of the parent class here, but you can check the rest of the code in the example project on github.

import UIKit

class Navigator  {

    weak var viewController:UIViewController!

    init(with viewController:UIViewController) {
        self.viewController = viewController
    }

    static func instantiateController(id:String, storyboard:String, bundle:Bundle? = nil)-> UIViewController{

        let storyBoard = UIStoryboard(name: storyboard, bundle: bundle)
        let viewController = storyBoard.instantiateViewController(withIdentifier:id)

        return viewController
    }

    ... continue... 
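
The elided part of the parent class includes the navigation helpers used later (push(nextViewController:) and dismiss(animated:)). Here is a plausible sketch of how they might sit on top of UIKit — an assumption, since the real project code may differ:

```swift
extension Navigator {

    // Pushes the next module's view controller onto the current navigation stack.
    func push(nextViewController: UIViewController) {
        viewController.navigationController?
            .pushViewController(nextViewController, animated: true)
    }

    // Pops the current module, going back to the previous one.
    func dismiss(animated: Bool) {
        viewController.navigationController?
            .popViewController(animated: animated)
    }
}
```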

Let’s go back to the makeModule() function. Here we build the module under discussion: to me it’s fascinating how easily all the actors are associated with each other and how the connections describe the whole architecture in a semantic way.
Let’s discuss the implementation of the Worker and the Scene for a moment. The Scene is initialized from a storyboard and the Director is its eventsHandler. The Worker on the other side needs a presenter that, again, is the Director (we’ll see later that the Director is responsible for the communication between domains, so it’s referenced by many objects).

Here is the portion of the Navigator class where we implement the Navigation:

extension ItemsListNavigator: ItemsList_Navigate{ 

    func gotoDetail(for item: Item){ 
        let itemDetail = ItemsDetailNavigator.makeModule(with: item)
        push(nextViewController: itemDetail)
    }

    func goBack(){ 
        dismiss(animated: true)
    }
}

With the gotoDetail function we call “makeModule”. In this case we are using the “makeModule” of the module that we will present, obtaining a View Controller that we push onto the navigation stack.

The “makeModule” for the ItemsDetail module needs an Item and we can easily provide it directly with the gotoDetail function. What we see here is how information can be easily shared between modules using well defined models.
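
For reference, the detail module's factory might look like this. This is a sketch only — ItemsDetailNavigator isn't shown in this article, so the actor names and the item injection are assumptions following the same naming convention:

```swift
class ItemsDetailNavigator: Navigator {

    // Hypothetical factory: builds the ItemsDetail module,
    // injecting the Item model shared between modules.
    static func makeModule(with item: Item) -> UIViewController {

        let scene = instantiateController(id: "Detail",
                                          storyboard: "Items") as! ItemsDetailScene
        let navigator = ItemsDetailNavigator(with: scene)
        let director = ItemsDetailDirector()

        director.item = item          // the shared model
        director.navigator = navigator
        director.ui = scene
        scene.eventsHandler = director

        return scene
    }
}
```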

ACTORS: DIRECTOR

The director is the brain of VIPER-S. Its first role is to act as a bridge between all the domains. It knows how to present data for the Scene, how to interact with the Worker to get data after a UI event, and when it’s time to call the navigator to change view.
What I love about protocols is that we can easily achieve loose coupling: the director doesn’t need to know which classes it is going to call to complete all these tasks; it only needs references to objects that implement some of the protocols defined in the contract.

Let’s see the code and examine what’s needed for each domain, one at a time:

class ItemsListDirector {

    var dataManager: ItemsList_ManageData!
    var ui: ItemsList_DisplayUI!
    var navigator: ItemsList_Navigate!

    // Data
    var items: [Item] = []

    // UI
    var itemsUI: [ItemUI] {
        return items.map { (item) -> ItemUI in
            return ItemUI(
                name: item.name,
                color: item.enabled ? UIColor.green : UIColor.red,
                date: formatDate(item.date))
        }
    }
} 

At the top of the class we define the references to the other actors. Note that, as we mentioned earlier, the references are not pointers to Scene, Worker or Navigator objects, but to objects that implement the needed protocols.
We keep a reference to the list of retrieved Items and we use a really handy computed property to convert from the Item to the ItemUI model.
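
The formatDate helper used inside itemsUI isn't shown in the excerpt. A minimal sketch of what it could be — the "yyyy-MM-dd" format string is an assumption, not taken from the project:

```swift
import Foundation

// Hypothetical helper: converts the model's Date into a displayable String.
func formatDate(_ date: Date) -> String {
    let formatter = DateFormatter()
    formatter.dateFormat = "yyyy-MM-dd"   // assumed display format
    return formatter.string(from: date)
}
```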

The director is able to present data coming from a Worker, so it needs to implement the ItemsList_PresentData.

extension ItemsListDirector: ItemsList_PresentData{

    func present(items:[Item]){
        self.items = items
        ui.display(items: itemsUI)
    }

    func presentSuccess(){
        ui.displaySuccess()
    }

    func presentError(){
        ui.displayError()
    }
}

With the first function we have obtained some items, so we store them and, through the “itemsUI” property, convert them into a model that can be easily presented by the UI (in this case the Scene). The other two methods are pass-throughs that present messages on the UI.

The HandleUIEvents implementation is straightforward:

extension ItemsListDirector: ItemsList_HandleUIEvents{

    func onUIReady() {
        dataManager.fetchItems()
    }

    func onDeleteAll(){
        dataManager.deleteAllItems()
    }

    func onItemSelected(index:Int) {
        let item = items[index]
        navigator.gotoDetail(for: item)
    }
}

When the onUIReady event is received, the Director asks the Data Manager for the items. The items will be presented with the method previously defined with the “PresentData” protocol. When an item has been selected, the Director will call the Navigator passing the selected item to show the item’s details view. After the deleteAll event is triggered, the director calls the dataManager to delete the items and then, depending on the result of the operation, the Director will present success or error with the methods we previously described.

Here is the overview of all the VIPER-S actors:

With the next and last article in the series, we are going to complete the VIPER-S code, learning how to share information between modules (we introduced the topic with the navigator class) and obviously talking about how to test this code.

Thanks again to Nicola for reviewing this article and for his very helpful hints!

VIPER-S: writing your own architecture to understand its importance (part 1) — Mon, 19 Jun 2017

After some months using VIPER for my apps, I started working on my own architecture: I wanted to create something better for my own needs. I then started sharing thoughts with my colleague Marco. He is on the Android side of things, but we needed to discuss to find common ground and both get to a consistent result.

We “kind of” failed and ended up with something really similar to VIPER, but! This revisited version of VIPER is what I’m currently using in my applications, so I wouldn’t consider it a failed attempt. More like a custom version of VIPER.

Along this path I learned so many things about architectures that I decided to share the experience with a series of articles. There are two things I’d like to focus on:

• the decisions taken to get to a complete architecture, highlighting rationale and doubts (some of which are still there)
• the architecture I ended up with, showing code and some practical examples.
From now on let’s call this structure VIPER-S. The S here stands for Semantic, since I tried to obtain a clearer way to name things, giving more significance to roles and communication, and adding some rules that improve code readability and writing.

Here you can download the final code of VIPER-S.

LET’S ARCHITECT

Let’s start this journey with a question. Why do we need an architecture? This question has many different answers but the most relevant are:
• to have a clear understanding of our code (and simplify its maintenance)
• to easily distribute responsibilities (and simplify teamwork)
• to improve testability (and simplify your life)

With these answers in mind, and moved by a profound sense of purpose, we can start planning our architecture.

I’m a big fan of “divide-et-impera”: to me it’s a way of life. That’s why I’d start by identifying all the domains and roles, the actors that are going to work on these domains and how those actors communicate with each other. Those elements are going to define the pillars of our architecture so it’s really important to have a clear understanding of what they are.

A domain is a big set that contains all the logic for an area of responsibility. A role is a little part of this set, which is more specific and identifies a precise need. An actor is a code element that implements all the functions to satisfy a role.

Let’s list and describe the domains and roles that I’ve identified to build VIPER-S.

ARCHITECTURE DOMAINS: USER INTERFACE

With the User Interface domain we show information to the users and we interact with them. Let’s see the roles for this domain.

ROLE: DISPLAY UI INFORMATION

This is a really “dumb” role. The data reaching this domain is ready to use and there’s no need to work on it any further. It only needs to be sent down to the final UI elements, with functions like this:


func display (date:String){ 
    label.text = date 
}

As you can see, the date property has probably been converted from Date to String in a previous step. We only display the ready-to-use information here. A label displays a String, so we expect to receive a String.

ROLE: HANDLE UI EVENTS

This is another not-too-proactive role: here we only intercept user interactions or application lifetime events. The actor responsible for this role is generally called within a UI target-action:


@IBAction func save(){ 
    eventsHandler.onSave()
}

ARCHITECTURE DOMAINS: DATA

The Data domain is where we obtain information from a source and we transform it to be presented later, or, alternatively, where we process a user action into something that can be stored somewhere or used somehow. Here are the roles for the Data domain.

ROLE: MANAGE DATA

Let’s imagine this role as a set of one or more workers that are responsible for handling specific jobs. They only know how to get their jobs done, and they notify someone else when they have completed or failed an operation.

Here is a simple (unsafe) example of a function for this role:


func fetchItems() {
    networkManager.get(at: itemsPath) { (completed, items) in
        if completed {
            presenter.present(items: items)
        } else {
            presenter.presentItemFetchError()
        }
    }
}

A worker is fetching items using a network manager. It knows exactly how to use the manager, but it doesn’t work with any value coming from the network, it just passes the value to an object that in turn knows how to present it.

ROLE: PRESENT DATA

Let’s remind ourselves not to confuse presenting with displaying: when we present the information we transform it into something that will be displayed through the UI later. The object that implements this role is often called from the Manage Data role. Returning to the previous example for the user interface, we are not setting the text value of the label here. Instead, we are transforming a Date into a readable String.


func present(date: Date) {
    let dateString = date.localizedString("yyyy-MM-dd")
    ui.display(date: dateString)
}

ARCHITECTURE DOMAIN: NAVIGATION

This domain has a single role: handling the navigation for the App. The logic behind how to display the “next view” is entirely handled within this role and the same is true for its initialization and dismissal. We then need to use UIKit to work with Storyboards and call all the needed default iOS navigation functions.


func navigateToDetail(for item: Item) {
    let itemDetail = ItemDetailNavigator.makeModule(with: item)
    push(nextViewController: itemDetail)
}

In this example the navigator is building the module (more on this term later — just look at it as a ViewController for now) that we are going to present and it pushes it to the current navigation stack.

COMMUNICATION BETWEEN DOMAINS

Let’s now introduce the “director”, the first actor for the architecture. We are going to see its code in detail later. For now let’s just talk about it as the way to build a bridge between the domains we just saw.

The director is responsible for driving the flow of information from UI events to data handling and from data handling back to the UI. It is also responsible for defining when navigation has to take place. Each operation starts from the director and each result of the operation, at some point, passes through it.

Let’s check the overview of the architecture discussed so far to better understand how communication happens:

All those arrows… but trust me, the flow is easier than it looks. Here is a real-life example: when a user taps the save button, the application has to save some information and display a dialog with a success message (or an error in case something goes wrong).

The flow starts from the left of the previous image, at “handle events”. The user tap is intercepted and passed to the director. The director sends the information to the object responsible for the “manage data” role. When this object completes the operation it presents the result to the director which, at this point, sends the information back to the UI domain, which in turn knows how to display it. Now let’s say that at the end of the saving operation, instead of presenting a popup we’d rather go to another page. Easy: the director, instead of moving the flow to the UI domain, can just drive it to the Navigation domain.
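
To make the round trip concrete, here is a deliberately tiny, synchronous sketch of the save flow in plain Swift. Every name in it (Director, Worker, Presenter, FakeUI) is illustrative only — the real actors are introduced in the next parts of the series:

```swift
// Hypothetical, simplified actors for the "save" flow.
protocol UIOutput: AnyObject {
    func display(message: String)
}

// "Present data": transforms a raw result into something displayable.
final class Presenter {
    weak var ui: UIOutput?
    func present(saveSucceeded: Bool) {
        ui?.display(message: saveSucceeded ? "Saved!" : "Something went wrong")
    }
}

// "Manage data": knows *how* to save and reports the outcome.
final class Worker {
    let presenter: Presenter
    init(presenter: Presenter) { self.presenter = presenter }
    func save() { presenter.present(saveSucceeded: true) }
}

// The director: entry point for UI events, drives the flow across domains.
final class Director {
    let worker: Worker
    init(worker: Worker) { self.worker = worker }
    func onSave() { worker.save() }
}

// A fake "display UI" actor that just records what it was asked to show.
final class FakeUI: UIOutput {
    var lastMessage = ""
    func display(message: String) { lastMessage = message }
}
```

Wiring these up and calling director.onSave() makes the message travel exactly along the arrows described above: event → director → worker → presenter → UI.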

LET’S CODE

It’s now finally time to translate the architecture logic into code!

Before starting this process we need to identify the required modules for the example we are going to implement. What’s a module, though? The architecture considers a module what we can simply call a “Page” or a “View” of the application. This means that for an application where you can list, display and add items you have 3 modules. With the MVC architecture, for instance, each module would be a view controller.

Let’s introduce the code example that we will implement in these tutorials. We are writing an application to handle generic “Items” that can be enabled or disabled. An Item has a name, a creation date and a state (enabled or disabled). We are going to implement 3 modules to handle items: “Items List”, “Add Item” and “Item Detail”. Adding the Welcome page to the above, we have a total of 4 modules divided into 2 groups: Items and General.

ORGANIZE YOUR PROJECT

I’m a messy guy, so I need a strict schema to follow when I’m working on a big project. For VIPER-S I decided to have a very clear folder structure. This is part of defining the architecture, after all.

Each module group has a root folder. In this case “General” and “Items” (I’d rather create real folders for the module groups, not just the Xcode project folders). Each module has its own folder. For Items we have “Add”, “List” and “Detail” and for “General” just “Welcome”.

This is the current folder structure for the project:

Each file and class follows a simple naming convention: prefixed using the folder structure that contains it, and then named after its specialization. For example, the director of the List module for Items is called “ItemsListDirector.swift” and the class name is “ItemsListDirector”. This is really useful with autocomplete: start typing “Items…” and you’ll get all the classes for this group; type “ItemsAdd…” to get only the classes for the Add module. It’s a really handy convention! 🙂

We’ll discuss other naming conventions later. This is just a simple rule that creates a shared logic for name definition and project organization. It’s a life-saver if you, like me, are not really good at keeping your naming style unchanged over the course of very long projects.

THE CONTRACT AND PROTOCOLS DEFINITION

Let’s begin by writing a contract that describes the architecture for each module through protocols. A contract is the part of the architecture where you can define precisely what a module does. It’s a sort of documentation for the module.

We’ll start from the “Items List” module, translating the roles previously described into protocols. For this module we know that it shows the list of “items” through a table and it has a “delete all” button to flush all the available items.

The “Display UI” role has to display items, errors and success messages. A good protocol to describe this role would be:


protocol ItemsList_DisplayUI {
    func display(items: [ItemUI])
    func display(error: String)
    func displaySuccess()
}

The ItemUI is a base object defined by simple types like String, UIImage or Bool. We’ll discuss it later.
All the functions that update UI elements with a UI model are prefixed with the word “display”. Being really strict with the naming convention is important, because I don’t want to have doubts like “should I call this function ‘show’, ‘display’, ‘update’ WAT?!”. All the protocols have a predefined set of verbs/keywords to use.
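
As a rough idea of what such a UI model might look like (this shape is my guess — the real ItemUI is discussed later in the series), it would carry only display-ready, simple types:

```swift
// A hypothetical ItemUI: nothing here needs further transformation before display.
struct ItemUI {
    let name: String          // ready to assign to a label
    let creationDate: String  // already converted from Date to a display String
    let enabled: Bool         // drives e.g. a switch or a checkmark
}
```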

Note: here is another little naming convention that I’m using. Considering we will end up with a considerable number of files and classes for a single module, I found it useful to differentiate protocols from classes. That’s why I’m putting an underscore between the module name and the role name, obtaining the protocol name (ItemsList_DisplayUI). Trust me, you’ll love this little trick later, when you write your own code and you want to autocomplete a class or a protocol name quickly.

The “Handle UI events” role has 3 functions: it has to say when the UI is ready (i.e. when viewDidLoad is called), it has to trigger an event when the user taps the “Delete All” button, and another event when an item is selected from the table.


protocol ItemsList_HandleUIEvents {
    func onUIReady()
    func onDeleteAll()
    func onItemSelected(index:Int) 
}

In general, the functions for this role start with the prefix “on” followed by the name of the handled event.

Let’s move on to the Data domain. The first role is “Manage Data” and it has 2 functions: fetch the items and delete all the items.


protocol ItemsList_ManageData {
    func fetchItems()
    func deleteAllItems()
}

The second role is “Present Data”. With this role we want to present items when available and we could also present generic success or error messages.


protocol ItemsList_PresentData {
    func present(items: [Item])
    func presentSuccess()
    func presentError()
}

Personally I love this notation and I find it extremely readable. The verb “present” is prefixed to all the protocol functions.

The Navigation domain’s only role is “Navigate”. From the ItemsList module we know that we can select an item and see its detail in a dedicated view. We can also go back to the Welcome view or, more generically, we can just go back to the previous view.


protocol ItemsList_Navigate {
    func gotoDetail(for item: Item)
    func goBack()
}

Functions for the navigate protocol are prefixed with “go/goto”.


This is the full code of the ItemsListProtocol.swift file. As you can see, if you know each role’s functionality you can easily understand what this module does:


protocol ItemsList_DisplayUI {
    func display(items: [ItemUI])
    func display(error: String)
    func displaySuccess()
}

protocol ItemsList_HandleUIEvents {
    func onUIReady()
    func onDeleteAll()
    func onItemSelected(index: Int)
}

protocol ItemsList_ManageData {
    func fetchItems()
    func deleteAllItems()
}

protocol ItemsList_PresentData {
    func present(items: [Item])
    func presentSuccess()
    func presentError()
}

protocol ItemsList_Navigate {
    func gotoDetail(for item: Item)
    func goBack()
}

This concludes part one of the series. In the coming parts we’ll dive deeper into the architecture’s code, writing all the actors involved. We’ll complete the ItemList module and we’ll talk about how to handle some specific patterns like passing information to another module (i.e. when you select an Item and you navigate to the detail page) and getting information from another module (i.e. when you add a new Item in the ItemsAdd module and you need to notify the ItemsList module to refresh the list).

Thanks for reading this far and stay tuned for the next installments in the series. Ciao!

Custom Controls: button action with confirmation through 3D Touch https://www.thinkandbuild.it/custom-controls-3d-touch-confirm/ Tue, 03 Jan 2017 21:30:02 +0000 http://www.thinkandbuild.it/?p=1287 3D Touch is the ability to track the user’s touch pressure level and, in my opinion, is one of the most interesting and under-exploited features of the iOS touch-handling system.

With this tutorial we are going to build a custom button that leverages 3D Touch to ask the user to confirm the button action and, if 3D Touch is not available on the user’s device, falls back to a different behaviour. Here is a quick video to show you how this control works:

1. When the user’s touch begins, a circular progress bar keeps track of the touch pressure. The circle fills in relation to the pressure: the harder the button is pressed, the more the circle is filled (I’ll show you later how we simulate this behaviour on devices that do not support 3D Touch).

2. When the circle is fully filled, it becomes an active button: the label changes to “OK” and the colour to green, indicating that the action can be confirmed. Now the user can just swipe up and release their finger over the circle to confirm the action.

Usually you ask the user to confirm a delete action using a pop-up. I really love to experiment with UX interactions, and in my opinion this control can easily replace the “standard” flow. You should try this behaviour on a physical device to understand how easy it is to interact with this control 🙂

Let’s code

First of all, if you don’t know how custom controls work I strongly encourage you to read my previous article about building custom controls and download the tutorial project to easily follow the next steps.

Drawing the UI

The code to draw the circle and the label displayed when the user starts interacting with the button is straightforward; let’s check it:


    private let circle = CAShapeLayer()
    private let msgLabel = CATextLayer()
    private let container = CALayer()
    .
    .
    .
   
    private func drawControl(){
        
        // Circle
        var transform = CGAffineTransform.identity
        circle.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        circle.path = CGPath(ellipseIn: CGRect(x: 0,y: 0,width: size.width, height: size.height),
                             transform: &transform)
        
        circle.strokeColor = UIColor.white.cgColor
        circle.fillColor = UIColor.clear.cgColor
        circle.lineWidth = 1
        circle.lineCap = kCALineCapRound
        circle.strokeEnd = 0 // initially set to 0
        circle.shadowColor = UIColor.white.cgColor
        circle.shadowRadius = 2.0
        circle.shadowOpacity = 1.0
        circle.shadowOffset = CGSize.zero
        circle.contentsScale = UIScreen.main.scale

        // Label
        msgLabel.font = UIFont.systemFont(ofSize: 3.0)
        msgLabel.fontSize = 12
        msgLabel.foregroundColor = UIColor.white.cgColor
        msgLabel.string = ""
        msgLabel.alignmentMode = "center"
        msgLabel.frame = CGRect(x: 0, y: (size.height / 2) - 8.0, width: size.width, height: 12)
        msgLabel.contentsScale = UIScreen.main.scale
        
        // Put it all together
        container.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        container.addSublayer(msgLabel)
        container.addSublayer(circle)
        
        layer.addSublayer(container)
    }

The circle and msgLabel layers are initialized and attached to the container layer.
There is nothing special to highlight in this code; just note that the strokeEnd property of circle is set to 0.
This property is really useful for easily obtaining nice animations on a shape layer. Briefly, the path that describes the shape layer draws its stroke between strokeStart and strokeEnd; the default values for these properties are 0 and 1, so by playing with this range you can easily get nifty drawing animations. For this control we set strokeEnd to 0 and we animate it to reflect the user’s touch pressure.

Control States

This control defines its UI and behaviour with a simple state machine described by the ConfirmActionButtonState enum.


enum ConfirmActionButtonState {
    case idle
    case updating
    case selected
    case confirmed
}

When no action is taken on the control, the state is idle. When user interaction starts the state changes to updating. When the circle is completely filled the state is selected, and if the user has also moved their finger inside the green circle the state is confirmed.

When the user lifts their finger, if the control state is confirmed we finally propagate the button action, since we can consider it confirmed; otherwise the state just moves back to idle.
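
The transitions above can be sketched as a pure decision function (a simplified guess at the rules — the 0.97 threshold and the full implementation appear later in the article):

```swift
enum ConfirmActionButtonState {
    case idle
    case updating
    case selected
    case confirmed
}

// Simplified: given the current "intention" (0...1) and whether the touch
// is inside the circle, decide the state for this touch cycle.
func nextState(intention: Double, touchInsideCircle: Bool) -> ConfirmActionButtonState {
    if intention > 0.97 {
        return touchInsideCircle ? .confirmed : .selected
    }
    return .updating
}
```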

Handling user touch

We override beginTracking, continueTracking and endTracking methods to easily respond to user touches and grab all the information for the control.

Within these methods we have to track 3 elements:
1. The touch location. Useful to define where to draw the container layer (the one that contains the circle and the message label).
2. The touch force value. Needed to animate the circle and understand whether to set the control state to updating or to selected and confirmed.
3. The updated touch location. We need to track touch position to verify if it is contained into the container layer bounds and, in that case, set state to confirmed or updating.

Let’s see the code for the beginTracking method.


    override func beginTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        super.beginTracking(touch, with: event)
        
        if traitCollection.forceTouchCapability != UIForceTouchCapability.available{
  // fallback code ….
        }
        
        let initialLocation = touch.location(in: self)
        
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        container.position = initialLocation ++ CGPoint(x: 0, y: -size.height)
        CATransaction.commit()
        
        return true
    }

We check for device touch force capabilities and, if the hardware doesn’t support this feature, we execute fallback code (we’ll talk about the fallback behaviour later). Then the touch location is used to define the container layer position, subtracting the control height. The ++ operator is defined at the end of the file to allow summing CGPoint values.
To avoid implicit system animations, the container position is assigned after the setDisableActions call (more information about this technique here [CALayer: CATransaction in Depth](http://calayer.com/core-animation/2016/05/17/catransaction-in-depth.html#preventing-animations-from-occurring) )

From the continueTracking function we perform all the operations needed to verify the control state.


    override func continueTracking(_ touch: UITouch, with event: UIEvent?) -> Bool {
        super.continueTracking(touch, with: event)
        lastTouchPosition = touch
        updateSelection(with:touch)
        
        return true
    }

The lastTouchPosition will be used later to support older devices that don’t have 3D Touch capability, while the updateSelection method receives the updated touch.
Here is the code for updateSelection:


    private func updateSelection(with touch: UITouch) {
        
        if self.traitCollection.forceTouchCapability == UIForceTouchCapability.available{
            intention = 1.0 * (min(touch.force, 3.0) / min(touch.maximumPossibleForce, 3.0))
        }
        
        if intention > 0.97 {
            if container.frame.contains(touch.location(in:self)){
                selectionState = .confirmed
            }else{
                selectionState = .selected
            }
            updateUI(with: 1.0)
        }
        else{
            if !container.frame.contains(touch.location(in:self)){
                selectionState = .updating
                updateUI(with: intention)
            }
        }
    }

Again, we check for force availability first and, if the feature is supported, we calculate the current “user intention”. The intention property can be assigned a value that goes from 0 (when no touches are observed) to 1 (when the touch reaches the maximum needed force). The operation to obtain this value is extremely simple: we just divide the current touch force by the maximum force, normalizing the value to a range valid for the “intention” property. Trying this code on a real device I found out that the user has to press with too much force to reach the maximum value, so I’ve added a cap of 3.0 to reduce the needed touch pressure.
(Actually I’m not so sure the name “intention” is a good choice… native speakers please, let me know if the name is clear enough to describe the property role :P).
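
Isolated from UIKit, the normalization can be sketched as a standalone function (hypothetical name, same math as the snippet above):

```swift
// Normalize the touch force to 0...1, capping both values at 3.0
// so the user doesn't have to press at full hardware force.
func normalizedIntention(force: Double, maximumPossibleForce: Double) -> Double {
    return 1.0 * (min(force, 3.0) / min(maximumPossibleForce, 3.0))
}
```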

Now that the intention value has been calculated for this touch cycle, we can update the control state and UI. If the value is greater than 0.97 and the user’s touch has already moved inside the green circle, the control state is confirmed; otherwise, if the user is still pressing the “delete” button, the current state is set to selected. When the value is less than 0.97 we say the control is just updating.

The updateUI function takes the current intention value and passes it to the strokeEnd property of the circle layer. Any other UI customization related to the “intention” can be defined inside this method.


    private func updateUI(with value:CGFloat){
        circle.strokeEnd = value
    }

Finally, we override the endTracking method to trigger the valueChanged event if the current control state is equal to confirmed.


    override func endTracking(_ touch: UITouch?, with event: UIEvent?) {
        super.endTracking(touch, with: event)
        intention = 0
        
        if selectionState == .confirmed{
            self.sendActions(for: UIControlEvents.valueChanged)
        }else{
            selectionState = .idle
            circle.strokeEnd = 0
        }
    }

If you check the Main.storyboard file you will see that the valueChanged action for the “delete” button has been assigned to the confirmDelete method of ViewController and, obviously, the custom class value for the delete button has been set to ConfirmActionButton.

Control state and UI

The control UI is updated in relation to the current control state. To simplify this behaviour the code to update UI has been placed directly inside the didSet observer for the selectionState property.

The code is straightforward: just change the circle colour and the label message depending on the new state, and call setNeedsLayout on circle to force its layout to be redrawn.


    private var selectionState:ConfirmActionButtonState = .idle {
        didSet{
            switch self.selectionState {
            case .idle, .updating:
                if oldValue != .updating && oldValue != .idle {
                    circle.strokeColor = UIColor.white.cgColor
                    circle.shadowColor = UIColor.white.cgColor
                    circle.transform = CATransform3DIdentity
                    msgLabel.string = ""
                }
                
            case .selected:
                if oldValue != .selected{
                    circle.strokeColor = UIColor.red.cgColor
                    circle.shadowColor = UIColor.red.cgColor
                    circle.transform = CATransform3DMakeScale(1.1, 1.1, 1)
                    msgLabel.string = "CONFIRM"
                }
                
            case .confirmed:
                if oldValue != .confirmed{
                    circle.strokeColor = UIColor.green.cgColor
                    circle.shadowColor = UIColor.green.cgColor
                    circle.transform = CATransform3DMakeScale(1.3, 1.3, 1)
                    msgLabel.string = "OK"
                }
            }
            circle.setNeedsLayout()
        }
    }

Fallback code

Just a quick note about the fallback for devices that don’t support 3D Touch. I really wanted to keep the same design for all devices, so I decided to update the intention property on a timer, relying on time instead of touch force. The logic is identical to what we have previously discussed, but the intention property is updated automatically every 0.1 seconds while the user is pressing the delete button. Here is the code for the beginTracking function where the timer is defined:


        if traitCollection.forceTouchCapability != UIForceTouchCapability.available{
            timer = Timer.scheduledTimer(timeInterval: 0.1,
                                         target: self,
                                         selector: #selector(ConfirmActionButton.updateTimedIntention),
                                         userInfo: nil,
                                         repeats: true)
            timer?.fire()
        }

The updateTimedIntention function is responsible for updating the intention value so that it reaches completion (1.0) after 2 seconds:


    func updateTimedIntention(){
        intention += CGFloat(0.1 / 2.0)
        updateSelection(with: lastTouchPosition)
    }

Conclusions

I really enjoyed writing this code and I think I’m going to talk about other custom controls soon. In my opinion there is still a lot of room to experiment with custom UI and improve user experience by leveraging new device features… I hope this tutorial inspires you 🙂

Implementing the Twitter iOS App UI (Update: Swift 3) https://www.thinkandbuild.it/implementing-the-twitter-ios-app-ui/ Thu, 08 Dec 2016 19:43:45 +0000 http://www.thinkandbuild.it/?p=927 After using Twitter’s iOS App for a while, I started looking at it with the developer’s eye and noticed that some of the subtle movements and component interactions are extremely interesting. This sparked my curiosity: how did you guys at Twitter do it?

More specifically, let’s talk about the profile view: isn’t it elegant? It looks like a default view, but if you look closely you’ll notice there’s much more. Layers overlap, scale and move in unison with the scrollview offset, creating a harmonious and smooth ensemble of transitions… I got carried away, but yes, you guessed it: I love it.

So, let’s do it and recreate this effect right away!

First things first, here is a preview of the final result for this tutorial:

Structure’s description

Before diving into the code I want to give you a brief idea of how the UI is structured.

Open the Main.storyboard file. Inside the only View Controller’s view you can find two main objects. The first is a view which represents the Header and the second is a Scrollview which contains the profile image (let’s call it Avatar) and the other information related to the account, like the username and the follow-me button. The view named Sizer is there just to be sure that the Scrollview content is big enough to enable vertical scrolling.

As you can see, the structure is really simple. Just note that I’ve put the Header outside the Scrollview, rather than place it together with the other elements, because, even though it might not be strictly necessary, it gives the structure more flexibility.

Let’s code

If you look carefully at the final animation you’ll notice you can manage two different possible actions:

1) User pulls down (when the Scrollview content is already at the top of the screen)

2) User scrolls down/up

This second action can in turn be split in four more steps:

2.1) Scrolling up, the header resizes down until it reaches Navigation Bar default size and then it sticks to the top of the screen.

2.2) Scrolling up, the Avatar becomes smaller.

2.3) When the header is fixed, the Avatar moves behind it.

2.4) When the top of the User’s name Label reaches the Header, a new white label is displayed from the bottom center of the Header. The Header image gets blurred.

Open ViewController.swift and let’s implement these steps one by one.

Setup the controller

The first thing to do is obviously to get information about the Scrollview offset. We can easily do that by implementing the scrollViewDidScroll function of the UIScrollViewDelegate protocol.

The simplest way to perform a transformation on a view is using Core Animation homogeneous three-dimensional transforms, and applying new values to the layer.transform property.

This tutorial about Core Animation might come in handy: http://www.thinkandbuild.it/playing-around-with-core-graphics-core-animation-and-touch-events-part-1/.

These are the first lines for the scrollViewDidScroll function:

 
   var offset = scrollView.contentOffset.y
   var avatarTransform = CATransform3DIdentity
   var headerTransform = CATransform3DIdentity

Here we get the current vertical offset and we initialize two transformations that we are going to setup later on with this function.

Pull down

Let’s manage the Pull Down action:

 
if offset < 0 {

    let headerScaleFactor: CGFloat = -(offset) / header.bounds.height
    let headerSizeVariation = ((header.bounds.height * (1.0 + headerScaleFactor)) - header.bounds.height) / 2.0
    headerTransform = CATransform3DTranslate(headerTransform, 0, headerSizeVariation, 0)
    headerTransform = CATransform3DScale(headerTransform, 1.0 + headerScaleFactor, 1.0 + headerScaleFactor, 0)

    header.layer.transform = headerTransform
}

First, we check that the offset is negative: it means the user is Pulling Down, entering the scrollview bounce-area.

The rest of the code is just simple math.

The Header has to scale up so that its top edge is fixed to the top of the screen and the image is scaled from the bottom.

Basically, the transformation is made by scaling and subsequently translating to the top for a value equal to the size variation of the view. In fact, you could achieve the same result moving the pivot point of the ImageView layer to the top and scaling it.

headerScaleFactor is calculated using a proportion. We want the Header to scale proportionally with the offset. In other words: when the offset reaches twice the Header’s height, the scale factor has to be 2.0.
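
As a pure function (with hypothetical parameter names; the real code reads header.bounds.height), the proportion is simply:

```swift
// Pulling down makes the offset negative; one header-height of pull
// yields a scale factor of 1.0, two header-heights yield 2.0.
func headerScaleFactor(offset: Double, headerHeight: Double) -> Double {
    return -offset / headerHeight
}
```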

The second action that we need to manage is the Scrolling Up/Down. Let’s see how to complete the transformation for the main elements of this UI one by one.

Header (First phase)

The current offset should be greater than 0. The Header should translate vertically following the offset until it reaches the desired height (we will speak about Header blur later).


headerTransform = CATransform3DTranslate(headerTransform, 0, max(-offset_HeaderStop, -offset), 0)

This time the code is really simple. We just transform the Header defining a minimum value that is the point at which the Header will stop its transition.

Shame on me: I’m lazy! so I’ve hardcoded numeric values like offset_HeaderStop inside variables. We could achieve the same result in other elegant ways, calculating UI element positions. Maybe next time.

Avatar

The Avatar is scaled with the same logic we used for the Pull Down, but in this case attaching the image to the bottom rather than the top. The code is really similar, except that we slow down the scaling animation by a factor of 1.4.


// Avatar -----------

let avatarScaleFactor = (min(offset_HeaderStop, offset)) / avatarImage.bounds.height / 1.4 // Slow down the animation
let avatarSizeVariation = ((avatarImage.bounds.height * (1.0 + avatarScaleFactor)) - avatarImage.bounds.height) / 2.0
avatarTransform = CATransform3DTranslate(avatarTransform, 0, avatarSizeVariation, 0)
avatarTransform = CATransform3DScale(avatarTransform, 1.0 - avatarScaleFactor, 1.0 - avatarScaleFactor, 0)

As you can see, we use the min function to stop the Avatar scaling when the Header transformation stops (offset_HeaderStop).

At this point, we define which is the frontmost layer depending on the current offset. Until the offset is less than or equal to offset_HeaderStop the frontmost layer is the Avatar; higher than offset_HeaderStop it’s the Header.


if offset <= offset_HeaderStop {

    if avatarImage.layer.zPosition < header.layer.zPosition {
        header.layer.zPosition = 0
    }

} else {
    if avatarImage.layer.zPosition >= header.layer.zPosition {
        header.layer.zPosition = 2
    }
}

White Label

Here is the code to animate the white Label:


let labelTransform = CATransform3DMakeTranslation(0, max(-distance_W_LabelHeader, offset_B_LabelHeader - offset), 0)
headerLabel.layer.transform = labelTransform

Here we introduce two new shame-on-me variables: when the offset is equal to offset_B_LabelHeader, the black username label touches the bottom of the Header.

distance_W_LabelHeader is the distance needed between the bottom of the Header and the White Label to center the Label inside the Header.

The transformation is calculated using this logic: the White Label has to appear as soon as the Black label touches the Header and it stops when it reaches the middle of the header. So we create the Y transition using:


max(-distance_W_LabelHeader, offset_B_LabelHeader - offset)

Blur

The last effect is the blurred Header. It took me three different libraries to find the right solution… I also tried building my own super-easy OpenGL ES helper, but updating the blur in realtime always ended up being extremely laggy.

Then I realized I could calculate the blur just once, overlap the non-blurred and the blurred image, and just play with the alpha value. I’m pretty sure that’s what the Twitter devs did.

In viewDidAppear we calculate the Blurred header and we hide it, setting its alpha to 0:


// Header - Blurred Image

headerBlurImageView = UIImageView(frame: header.bounds)
headerBlurImageView?.image = UIImage(named: "header_bg")?.blurredImage(withRadius: 10, iterations: 20, tintColor: UIColor.clear)
headerBlurImageView?.contentMode = UIViewContentMode.scaleAspectFill
headerBlurImageView?.alpha = 0.0
header.insertSubview(headerBlurImageView, belowSubview: headerLabel)

The blurred view is obtained using FXBlurView.

In the scrollViewDidScroll function we just update the alpha depending on the offset:


headerBlurImageView?.alpha = min(1.0, (offset - offset_B_LabelHeader)/distance_W_LabelHeader)

The logic behind this calculation is that the maximum value has to be 1, the blur has to start when the Black Label reaches the Header, and it has to stop when the White Label is at its final position.
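The alpha formula can be checked in isolation with the same hypothetical constants as before. One detail worth noting: for offsets below offset_B_LabelHeader the expression goes negative; in the original code UIKit simply treats a negative alpha as fully transparent, while this sketch clamps it to 0 explicitly:

```swift
// Hypothetical values, for illustration only
let offset_B_LabelHeader: Double = 95.0
let distance_W_LabelHeader: Double = 35.0

// Alpha of the blurred header at a given scroll offset.
// min() caps the value at 1; max() clamps negative results to 0.
func blurAlpha(for offset: Double) -> Double {
    return max(0, min(1.0, (offset - offset_B_LabelHeader) / distance_W_LabelHeader))
}

let fadeStart = blurAlpha(for: 95)  // 0 — blur starts with the black label at the header
let fadeEnd   = blurAlpha(for: 130) // 1 — fully blurred once the white label is centered
```

So the blur fades in over exactly the same offset range the White Label uses to travel to its final position.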

That’s it!

I hope you’ve enjoyed this tutorial (despite the shame-on-me variables :P). Studying how to reproduce such a great animation was a lot of fun for me.

And poke me on Twitter if you have any interesting UIs you’d like to see x-rayed and rebuilt: we could work on it together! 🙂

A big thanks goes to Nicola, who took the time to review this article!

]]>
Quick Guide: Animations with UIViewPropertyAnimator https://www.thinkandbuild.it/quick-guide-animations-with-uiviewpropertyanimator/ Sun, 20 Nov 2016 22:55:58 +0000 http://www.thinkandbuild.it/?p=1248 With iOS 10 came a bunch of interesting new features, like UIViewPropertyAnimator, a brand-new class that improves animation handling.
The property animator completely changes the flow we are used to, adding finer control over the animation logic.

A simple animation

Let’s see how to build a simple animation to change the center property of a view.


let animator = UIViewPropertyAnimator(duration: 1.0, curve: .easeOut){
	AView.center = finalPoint
}
animator.startAnimation()

There are at least three interesting things to note:
1) The animation is defined through a closure, much like the UIView animation helpers (UIView.animate(withDuration:…)).
2) An object, the animator, is returned.
3) The animation is not started immediately, but is triggered with the startAnimation() function.

Animation state

The major change in the way we animate an element is that a property animator comes with full state machine logic. The UIViewAnimating protocol provides features to manage the state of the animation in a simple and clear way, through functions like startAnimation, pauseAnimation and stopAnimation. Calling these functions updates the state value, making it switch between active, inactive and stopped.

The animation state is active when the animation has been started or paused; it is inactive when it has just been initialized and not yet started, or when it has completed. It's worth clarifying the small difference between stopped and inactive: after a stop command the state becomes stopped, and the animator then calls finishAnimation(at:) to mark the animation as completed, set the state back to inactive and eventually call any completion block (more on that later).
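Those transitions can be summarized with a tiny model. This is a conceptual sketch only, not UIKit's real implementation — the type and method names mirror the real API, but the struct itself is hypothetical:

```swift
// A minimal model of the UIViewAnimatingState transitions described above.
enum AnimatorState { case inactive, active, stopped }

struct MiniAnimator {
    // A freshly initialized animator starts out inactive.
    private(set) var state: AnimatorState = .inactive

    mutating func startAnimation() { state = .active }
    mutating func pauseAnimation() { state = .active } // pausing keeps the animator active
    mutating func stopAnimation()  { state = .stopped }
    // finishAnimation(at:) is what returns a stopped animator to
    // inactive and fires any completion blocks.
    mutating func finishAnimation() { state = .inactive }
}
```

Walking a MiniAnimator through startAnimation → stopAnimation → finishAnimation reproduces the active → stopped → inactive sequence described in the paragraph above.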

Animation options

As you have probably noticed in the previous example, together with the animation block we defined two parameters: the duration of the animation and the animation curve, a UIViewAnimationCurve instance that can represent the most common curves (easeIn, easeOut, linear or easeInOut).

In case you need more control over the animation curve, you can use a custom Bézier curve defined by two control points.


let animator = UIViewPropertyAnimator(
               duration: 1.0,
               controlPoint1: CGPoint(x: 0.1, y: 0.5),
               controlPoint2: CGPoint(x: 0.5, y: 0.2)) {

        AView.alpha = 0.0
}

(If the Bézier curves are not enough, you could even specify a completely custom curve with a UITimingCurveProvider.)

Another interesting option that you can pass to the constructor is the dampingRatio value. Similarly to the UIView animation helpers, you can define a spring effect by specifying a damping ratio between 0 and 1.


let animator = UIViewPropertyAnimator(
               duration: 1.0,
               dampingRatio: 0.4) {

        AView.center = CGPoint(x: 0, y: 0)
}

Delaying the animation is quite easy too: just call the startAnimation function with the afterDelay parameter.


animator.startAnimation(afterDelay:2.5)

Animation Blocks

UIViewPropertyAnimator adopts the UIViewImplicitlyAnimating protocol, which provides the animator with some other interesting abilities. For example, you can specify multiple animation blocks in addition to the one specified during initialization.


// Initialization
let animator = UIViewPropertyAnimator(duration: 2.0, curve: .easeOut){
	AView.alpha = 0.0
}
// Another animation block
animator.addAnimations {
	AView.center = aNewPosition
}
animator.startAnimation()

You can also add an animation block to an animation that is already running; the block will be executed immediately, using the remaining time as the duration of the new animation.

Interacting with the animation flow

As we have already seen, we can easily interact with the animation flow by calling startAnimation, stopAnimation and pauseAnimation. The default flow of the animation, from the start to the end point, can be modified through the fractionComplete property. This value indicates the completion percentage of the animation, from 0.0 to 1.0. You can modify the value to drive the flow as you prefer (for example, the user might change the fraction in real time using a slider or a pan gesture).


animator.fractionComplete = CGFloat(slider.value)
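Driving the fraction with a pan gesture usually boils down to mapping the gesture's translation onto the 0…1 range. Here is a sketch of that mapping as a pure function; `totalDistance` is a hypothetical value representing whatever drag distance should count as a fully completed animation:

```swift
// Map a pan translation onto a completion fraction in [0, 1].
// `totalDistance` is an assumption: the drag distance that
// corresponds to the animation being 100% complete.
func fraction(forTranslation translation: Double, totalDistance: Double) -> Double {
    return max(0, min(1, translation / totalDistance))
}

// In a UIPanGestureRecognizer handler you would then write something like:
// animator.fractionComplete = CGFloat(fraction(
//     forTranslation: Double(pan.translation(in: view).y),
//     totalDistance: 200))
```

The clamping matters: without it, an over-drag would push fractionComplete outside its valid range.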

In some cases you might want to perform actions when the animation completes. The addCompletion function lets you add a block that will be triggered at that point.


animator.addCompletion { (position) in
	print("Animation completed")
}

The position is a UIViewAnimatingPosition, and it specifies whether the animation stopped at its start, end or current position. Normally you will receive the end value.

That’s all for this quick guide.
I can’t wait to play more with this new animation system to implement some really nice UI effects! I’ll share my experiments on Twitter 😉 Ciao!

]]>