Tuesday, November 7, 2017

Service Principals and Azure AD

There are a few scenarios with Azure AD that folks commonly run into.

I have not blogged about the relationship between an Azure Tenant and Azure AD, so let me briefly describe it here.
(I will be repeating this all in future posts, as this relationship is important to grok)

An Azure Tenant is the Account that you or your company has in Azure.  It is divided into Subscriptions. 
Each Subscription is where you 'consume' in Azure; and it also serves as an isolation boundary (as in, resources in different subscriptions can only talk to each other through public entry points - they cannot directly touch each other).
Below that you have Resource Groups, which are management containers (not isolation).
And then you have the actual resources that you get billed for consuming.

All of this together is an Azure Tenant - or phrased another way, you are a Tenant of Azure and this is your playground and thus billing entity.
I will keep using the phrase Azure Tenant in this way.
(I run across MSFT folks that use the word Subscription when referring to the Tenant, and it is just plain incorrect and thus confusing as to the ramifications)

Azure AD is an entirely separate thing.  It is this huge multi-tenant cloud based identity provider, with a number of cool features and touch points.
An Azure Tenant must have an associated Azure AD, but an Azure AD has no dependency on an Azure Tenant (or an Office365 tenant - which is yet another entity).

A single company can have multiple Azure ADs (which is highly likely); they could also have multiple Azure Tenants (which is not very likely, but possible).

A single Azure Tenant can only be associated with one Azure AD.  Nuance here; the tenant has one Azure AD, but people from other Azure ADs can be granted access.  But the invited accounts are foreign accounts.

I bring all of this up since it is this multi-pronged association that usually gets folks into a pickle.

"Service Principal" is a term that has been around the IT world for a long time.  It describes the user account that a particular application or service runs under.  If that application needs to access resources on the network, the Service Principal user account for that particular service is used.

This has worked well for decades in the enterprise with Active Directory.  And now we need to take this to the cloud.  And it works a bit differently with Azure Active Directory.

Within Azure Active Directory the service principal is called an "App Registration".  And just like with the enterprise, you would have a unique app registration for each application.
This is different from a user account that you simply grant access to Azure resources.  While it may appear to be the same thing, it isn't.

Now, as we put things together a number of questions arise.  Which Azure AD should my app registration reside in?  Is there any limitation on the user account?  What resources does the app registration need to access?

Most likely you are using the app registration to interact with your Azure tenant resources.  For example, the app registration is used to provision new machines, or to power machines on and off.  Essentially to perform lifecycle events on your Azure resources.

This means that the app registration must be "native" to the Azure AD associated with the Azure Tenant - regardless of what Azure AD your user accounts reside in.  You cannot simply use a random user account.

If you manually create your app registration, you need to be a user administrator in your Azure AD.  And then you can grant the permissions to the Subscription or Resource Groups and you are golden.

https://itproctology.blogspot.com/2017/04/manually-creating-service-principal-for.html
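If you prefer the command line over the portal, the manual creation can be sketched with the Azure CLI.  This is a hedged example - the app name, subscription ID, and resource group are placeholders, and the exact flags have shifted between CLI versions:

```shell
# Create the app registration (service principal) in the tenant's Azure AD,
# and grant it Contributor on a specific resource group in one step.
# All names and IDs below are placeholders - substitute your own.
az ad sp create-for-rbac \
  --name "my-provisioning-app" \
  --role Contributor \
  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
```

The command prints the appId, tenant, and a generated password - those are the credentials you would hand to the application.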

If you are using some other app to create the app registration programmatically, things immediately get interesting.  All of a sudden your user account matters.
To programmatically create an app registration, your user account needs to be native to the Azure AD.
This is a special kind of native.  The Azure AD account that you use must be an "onMicrosoft" account within that Azure AD.  You cannot use a user account that is synchronized with your on-premises Active Directory.  It does not even matter if your user is a domain admin.

Each Azure AD begins life as an "onMicrosoft" entity.  If you look at the properties of the Azure AD, you will see the information about its original creation, such as brianeh.onmicrosoft.com.  And this remains even if you add a vanity domain, such as ITProctology.com to the Azure AD.

I mention all of this because you might be in a position where you need to create a special Azure AD account, so that your app can create its own app registration (Citrix Cloud does this, RDMI does this).

In your Azure AD, create a new cloud user, using name@domain.onmicrosoft.com.  This will create a cloud user that is native only to the Azure AD, and this user will have the full permissions to the API that it needs to create an app registration.

In your app, you then use the credentials of this new user account so that it can access the API and self-create its app registration.
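That cloud user creation step can be sketched with the Azure CLI.  The display name, UPN, and password below are placeholders - adjust for your own onmicrosoft.com domain:

```shell
# Create a cloud-native (onMicrosoft) user in the Azure AD.
# This account is not synchronized from on-premises AD, so it can
# call the API that creates app registrations.
az ad user create \
  --display-name "App Provisioning Account" \
  --user-principal-name appprov@brianeh.onmicrosoft.com \
  --password "<strong-password>"
```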

A lot of explanation just to get you to the answer, but the why is often as important as the what.


Monday, October 9, 2017

The gotchas of Azure AD Domain Services in ARM

Not too awful long ago Azure Active Directory Domain Services moved over to the ARM portal from the Azure Classic portal.

Yea! the world said.  And there was much rejoicing.

Now, the real world impacts of this.

There are a few scenarios with Azure AD that folks commonly run into.

I have not blogged about the relationship between an Azure Tenant and Azure AD, so let me briefly describe it here.
(I will be repeating this all in future posts, as this relationship is important to grok)

An Azure Tenant is the Account that you or your company has in Azure.  It is divided into Subscriptions. 
Each Subscription is where you 'consume' in Azure; and it also serves as an isolation boundary (as in, resources in different subscriptions can only talk to each other through public entry points - they cannot directly touch each other).
Below that you have Resource Groups, which are management containers (not isolation).
And then you have the actual resources that you get billed for consuming.

All of this together is an Azure Tenant - or phrased another way, you are a Tenant of Azure and this is your playground and thus billing entity.
I will keep using the phrase Azure Tenant in this way.
(I run across MSFT folks that use the word Subscription when referring to the Tenant, and it is just plain incorrect and thus confusing as to the ramifications)

Azure AD is an entirely separate thing.  It is this huge multi-tenant cloud based identity provider, with a number of cool features and touch points.
An Azure Tenant must have an associated Azure AD, but an Azure AD has no dependency on an Azure Tenant (or an Office365 tenant - which is yet another entity).

A single company can have multiple Azure AD's (which is highly likely), they could also have multiple Azure Tenants (which is not very likely, but possible).

A single Azure Tenant can only be associated with one Azure AD.  Nuance here; the tenant has one Azure AD, but people from other Azure ADs can be granted access.  But the invited accounts are foreign accounts.

Now.  Some background into the processes that get us into the strange places that folks end up in.

When an Azure Tenant is created an Azure AD is created for it.
So you end up with an Azure AD and an account such as you@tenantName.onmicrosoft.com

This is fine.  It gets you up and running, and then you add your admins, which might be your.admin@yourcompany.com, and they get invited.  Everything in the Azure Portal works.  Now, let's get into the cases that won't work in this scenario.

This actually puts you in a very common scenario, the scenario where the Azure AD associated with your Azure Tenant is not the same Azure AD where your corporate user accounts reside.

Now, if you only want to use Azure AD RBAC to add your IT folks to the Azure Tenant for administrative purposes, this is fine.  And thus, a more common scenario than most folks want to realize.

Now, let's get to Azure AD Domain Services in ARM.  There is a security boundary here: the Azure Tenant.  Therefore, when you enable AAD DS in ARM you are restricted to the Azure AD that is associated with the Azure Tenant.

Oh, your Azure Tenant Azure AD is not the one with your users?  Oh my!  How do we resolve this?
(trust me, the sarcasm is real here, I can't tell you how many times I have spoken to folks about this and it takes a while for all the dots to connect before they realize the dilemma).

There are three ways to resolve this:
  1. 're-parent' the Azure Tenant.  What does this mean?  It means that you make some other Azure AD the primary Azure AD for your Azure Tenant.  There is an option in the Azure Portal "Move to another Directory".  The impacts:  If you had any RBAC set up, you will break it, and therefore need to set it up all over again.
  2. Use a vNet.  When logged on to the Azure AD where the users are, create a new Azure Tenant and subscription.  Turn on AAD DS there.  Then set up a gateway between the vNet in this subscription where your AAD DS and user accounts are, and a vNet in a subscription of the Azure Tenant where your workloads are that require the domain services provided by AAD DS.
  3. Don't use AAD DS.  Stand up a Windows Server Domain Service VM (more than one for a proper deployment) and use Azure AD Connect to sync the users with the AD domain.
In the end, this is about your user accounts and the reason you wanted AAD DS in the first place (you need NTLM or Kerberos for some reason).
Yes, AAD DS is convenient, but the security model that forced it into this particular assumption is not always in line with reality.
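For what it is worth, the gateway connection in option 2 can be sketched with the Azure CLI.  This assumes a virtual network gateway already exists in each subscription; every name, ID, and the shared key below is a placeholder:

```shell
# Connect the vNet holding AAD DS (and the user accounts) to the vNet
# holding the workloads, across two subscriptions.
# Run from the AAD DS subscription; repeat the mirror-image command
# from the workload subscription.
az network vpn-connection create \
  --name aadds-to-workloads \
  --resource-group rg-aadds \
  --vnet-gateway1 gw-aadds \
  --vnet-gateway2 "/subscriptions/<workload-sub-id>/resourceGroups/rg-workloads/providers/Microsoft.Network/virtualNetworkGateways/gw-workloads" \
  --shared-key "<shared-key>"
```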

Friday, October 6, 2017

Day two as a free agent - looking sideways

If you missed the tale, I was RIF'd and I thought I would spend some time blogging about the experience and whatever I decide to do next.

All of this went down on Wednesday, October 4, 2017 and pretty much wrapped up by noon.  Out of the building. 
My user account was gone by 2pm.  (It's amazing what a calendar invite that includes your personal email account as an attendee can show you - when the other attendee becomes an object that can't be resolved.)

Talking last night, the wife was lamenting that she could not ping me on Skype any longer.  I mentioned that I do still have a Skype for Business account thanks to the MVP program.  And I proudly stated, the domain is blocked by China (ITProctology.com).  She simply looked at me quizzically and said she 'will think about it'.

The day started like any other day.  I got up at the 'usual' time, cleaned up, scooped the cat litter, got the kid out of bed.

Kissed the lovely wife as she went off to work, got the kid to the bus stop, checked in the morning HAM radio net from the car.  Then drove all the way...  back home.

7:30am
Well, now what?  I have had a list of all kinds of things I have needed to get done.

Cold morning, checked the air in the tires, a bit low.  Grabbed the air compressor and resolved that.

Kind of chilly in the workshop, fired up the stove.
Looked at the wood scraps and thought of the cat tree the wife wants me to build.  Not feeling it.

7:45am
Watched an MVP PGI from yesterday that I missed. 
Got distracted by;
dirty dishes, cats fighting, HAM hobby antenna research, smelly garbage can, full recycling can, making list of items for another project from the hardware store, checking email (each time a notification popped up), checking LinkedIn, checking Twitter, checking Facebook.
I listened to the recording and viewed the visuals for the part I really wanted to see.

8:45am
Could really use a latte, warm, frothy, 15 minute drive - um, no.
Eventually feeling parched.  Looking for something cold and carbonated.  Hmm, no cold beverage fridge.
Tea it is I guess.

9:30am
Made notes of other follow up items I needed to do with HR.
Returned call to HR from yesterday (they called while I was in the middle of the only thing I had scheduled all day) - ring, ring, ring, disconnect.
I think, that was curious - I will try again later, she may have been on the phone with someone.

9:45 am
Phone rings - it's Cabo (Mexico).  They want to know when I can come visit.  Um, no...

10:00am
Start typing this

10:13am
All caught up
Checked email, read an article the wife sent, investigated LinkedIn Premium, looked around thinking "what next"

10:30am
Yea! spam to delete

And a little after that, my entire attempt at writing a humorous blog post was ruined.
(the following falls into a category that I won't name.  The comments are not disparagement to my former employer.)

I finally got hold of my HR representative, all was fine, my questions were answered.
Then I was asked a few questions (that I had already answered).
They wanted to know the accounts that I was using to access particular resources so they could be deactivated.  No problem.  I gave the account names.
Then I was asked for the passwords to those accounts.  Um.  No.  Just no.
My GitHub account.  They wanted the password to my GitHub account.  No, I never had a 'corporate' GitHub account.  They can just remove my account from the 'company' in GitHub.
And besides, I have access to other repositories that have nothing to do with my former employer. How can I trust the people that I am giving access to _my_ account?

Right there, any groove I had at writing humor was ruined.

So, I simply stepped away from everything for the remainder of the day.  And focused on other projects around the house.
Now, I still can't get this incident out of my head.  It is 5pm and I have to finish this post.

It is Friday.  Monday will bring a new adventure.  And a new post.
That one will actually be technical, and very useful to many folks.




Thursday, October 5, 2017

Day one as a free agent - looking back

tl;dr
This is my therapy for working through the emotions of being RIF'ed; this is not any commentary on my previous employer.
As I open up about my experience, I hope to be helpful to others in at least letting you know you are not alone in your experience.
This is me, pretty raw.

Thursday, October 5, 2017 6am

I thought I was doing pretty well this morning.
Then I saw a ping from a long time co-worker through LinkedIn.  That was okay.
It was when my phone reminded me that it was time to go to work...  That really stirred up the emotions.

It suddenly dawned on me that I have not been out of work in 30 years, 20 of those in the IT industry.  The changes I have seen and been part of in some way.  It is crazy.
But, it is this that makes the emotion - the abrupt loss of comrades.  Folks I have worked on projects with, suffered with, celebrated with, tackled big ideas and problems with.
The forced end of my time in the office really touches this emotional well.  This is where the feelings come from - visceral and powerful.

I have always been a person that was broad across a number of technologies.  It gave me a valuable wide angle lens; the systems view of IT.  The dependencies, connections, combinations and touch points.  How this impacted that and so on.
I have worked with a number of younger folks that lack this view, or approached another way; lack the experience to have this view.

I have long had two statements for every manager I have worked for:
  1. Keep me relevant
  2. My job is to make you look good.  Your job is to be my shit screen.
Keep me relevant - that has always been important.  It is my way of expressing that I want to grow and I want to be involved in the company growing, in one simple statement.

My job is to make you look good - that is one that some folks have had a hard time with when I mention it.  It is me appealing to something that my manager needs, he / she needs successful people and a strong team.  That makes them look good, and keeps them relevant and valuable.

It is all a synergy of feedback loops.  And quite honestly, these simple statements of relationship I think have been very powerful in my past success.  Doors have been opened for me, and I have been allowed to organically take and make opportunities as a result.

I cannot be more grateful to my last manager (who ended up being my neighbor (that was strange for a while)).  He saw something in me and harnessed it, supported it, opened doors, and allowed me to just go.  It was great.
I did not fall into the '4 years and I am bored' trap.  The work stayed interesting and challenging.  And that is so incredibly important.

I also found a mentor for a couple years in there.  Not with my former employer though.  He helped me realize many things and to envision others.  That is a relationship that I need to renew, without the encumbrance of the employer relationship.

But in writing this one thing has occurred to me.
While I left behind lots of valuable works, great ideas, and intellectual property - they can't keep what is in my head.  I still have ideas, I still have knowledge, I still have worth and value.  All of those experiences - those belong to me and not to my former employer.
That is my worth as I look back to figure out how to look forward.

I have carried with me a couple office artifacts for many years now.  One a cover from an Internet magazine long gone (not an online magazine, a magazine about the business of the Internet), the second a Calvin and Hobbes cartoon.

http://www.gocomics.com/calvinandhobbes/2013/10/17

Right now I am listening to the Passacaglia and Fugue in C minor, which should only be played on a pipe organ, and the best version I have ever heard was recorded by Virgil Fox at the Fillmore East.  C minor is the umami of musical keys.  It is earthy, rich, flavorful.
I have listened to this piece for years, generally as loud as my speakers can tolerate without distortion. It is 15 minutes that always helps me clear my head and release emotional tension.

Today, I am posting early.  I have some resources to check out, and I am going to spend the afternoon with my tattoo artist, finishing the work he started a few months ago.
Nothing more relaxing than some time under the needle.

Being laid off sucks

tl;dr
There was a substantial RIF yesterday
Being RIF'ed sucks
Yea, I am okay.  And this blogging is therapeutic.
No, this is not sour grapes, and I am not disparaging my previous employer in any way.  Please don't take any comments in that way, that is not the intent.
That is your warning, read on if you like.

Wednesday, October 4, 2017 6am

My mind has been buzzing in a thousand different directions lately.  My team and I have been working under rumors of 'cost reductions', and our work site has appeared to be one of the targets.

Quite frankly, the entire company has been on edge for two weeks now.  Internal email volume has reduced to a trickle, chatter on Slack has trickled down to only the really critical questions or help.  Really obvious that most everyone at this point in time knows that something is up.

I am beginning this story in the morning of the 'big day'.  My brain has been busy half the night working on this, and it just needs to get out of my head.

I wonder how many on my team are going to wear a red shirt into work today, as I have...

Needless to say, my emotions are mixed at this point.  The one upside of rumors is that once the threads start to come together, it helps you move through the stages of grief.  And the meeting invitation that many of us received I hope will be a relief, since the anticipation can stop and reality will be known.

I can say this, no matter the face you put on this; it really is emotional.  It is really easy to feel depressed and to feel unvalued.
I honestly didn't think that writing this would be as difficult as it is seeming to be.  But I am at the point of letting go, of what I am not clear.  And I think that is the struggle.

I have worked at my current company and office for 10+ years.  I have made friends, worked with some incredibly smart people, worked on some incredibly cool and innovative projects.  I have nothing to regret for my work, or the experience I have gained. 

So many things that I have been involved in, that I could not share, could not talk about to anyone other than my team.
Until earlier this year I was in a research team.  We were always forward looking, strategic in our projects, and very early in our efforts.
Changes were made and that group was dissolved and we became a more traditional development team.  Definitely different work.

For me, I was finally able to work on one of my passions: customer success.  That was great.  What was not great were the internal struggles due to the way the business processes, internal feedback, and internal silos reinforced thinking.  This frustration of my position I will not miss.
And I have to say, 'speaking' that frustration is relieving.  But I don't want this to be about sour grapes.  It really isn't.

I wanted this to be about moving through and moving on.  This is the first time I have been on this side of a layoff.
I have been one of the lucky ones to remain behind multiple times, both in a leadership position and as an independent contributor.  That is not simple, that is emotional and disruptive as well.

I have to look at this as the kick in the butt to remake myself (again).
This would not be the first professional shift in my life.  I have remade myself many times, and risen to the occasion each time.  Then it is always the question of "what's next?"

This time it is different, the first question in my head is "now what?"  and I have to consciously place that aside and ask "what's next?"
That is what I need to focus on and simply think about what excites me, what challenges me, what can highly engage me for the next 10 years.

Now, I am going to take a pause from writing, head into work, and do what a team does as we wait for the meeting that outlines our fate.  Nothing anxious about that at all....  :-S

Wednesday, October 4, 2017  12pm

The message has been delivered.
I have had a chance to talk to HR to clarify some questions about the severance.
There is a strange feeling of relief.  I am simply pretty numb to the whole thing.
Strange.
Standing around talking with my co-workers that have been tasked with escorting us out.  What a sucky task.  Being a survivor of these things in the past, not a great mental place to put the remaining folks in.

And that's it.
Move on, go away.  Bye.
That is it.
That is the feeling.
Have I said it is kind of surreal?

A few of us retired for the afternoon to a local business to have lunch, a couple beers, and play Dungeons and Dragons for a few hours.
That was a good distraction.

That is all for now, more tomorrow.  As I am sure there will be more tomorrow.  And as I mentioned, this is therapeutic.




Monday, July 31, 2017

Isolating Citrix Cloud in your Azure Tenant

I have recently been studying issues that customers are having when trying to stand up a proof-of-concept environment for Citrix Cloud in Azure.

Most of these customers are standing up the full XenApp and XenDesktop Service.  However, our Citrix Cloud Services all have the same basic needs for any customer:
  1. Azure Subscription (for workers and infrastructure)
  2. App Registration (this is an Azure Tenant service account for our cloud based control plane to perform worker lifecycle events within a subscription)
  3. Virtual Network (the machines need IP addresses)
  4. Active Directory (there is a much larger discussion here, but either a read / write Domain Controller VM  or the Azure Active Directory Domain Service will work)
  5. The DNS setting for the Virtual Network must be your Active Directory 
  6. Cloud Connector machines (the connection between the machines in the subscription and the control plane)
  7. Some type of 'golden' image that is provisioned into the worker machines your end customers get their work done on.

Growing this conversation from the bottom up;

Each customer of Azure has at least one Azure Tenant.
This is your account in Azure.  It is the highest level of connection between Azure and you the customer.
Within your Azure Tenant you have Subscriptions.
Subscriptions are billing boundaries and service boundaries (services in different subscriptions cannot 'talk' to each other without extra work, as if they are in different buildings).

Isolating Citrix Cloud in your tenant;


Can you isolate Citrix Cloud to its own Subscription in your Azure Tenant?  Yes!  And that is actually the topology that I am going to describe here.  How to isolate Citrix Cloud from your corporate infrastructure.

Common project slow down points that I have heard are:  modifications to existing virtual networks and protecting Active Directory.  

Focusing on the Virtual Network issue first;

You CAN create a virtual network dedicated to your Citrix Cloud deployment. 
The important things to remember are:
  • You need a route to your Active Directory
  • You must update the DNS settings of the Citrix Cloud virtual network to be the AD
The DNS setting is the most common place where customers trip up.  It must be set; the Azure default leaves the machines unable to resolve the Active Directory.
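Setting the DNS servers can be sketched with the Azure CLI.  The vNet name, resource group, and IP addresses are placeholders for your own environment:

```shell
# Point the Citrix Cloud virtual network's DNS at the Active Directory
# (or AAD DS) addresses instead of the Azure default.
az network vnet update \
  --name citrix-cloud-vnet \
  --resource-group rg-citrix \
  --dns-servers 10.0.0.4 10.0.0.5
```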

The three models as pictures;

Pictures often tell a story faster and easier, so I wanted to provide these to get you started thinking about your individual topology as well.

If your Active Directory is on the same Virtual Network you are most likely golden.

If your Active Directory machine(s) is on a different Virtual Network in the same subscription, you can use peering between the two virtual networks.

If your Active Directory machine(s) is on a different Virtual Network in a different subscription, you must use a gateway between the two virtual networks.
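The same-subscription peering case can be sketched with the Azure CLI.  The names are placeholders, and a matching command is needed in the opposite direction so both vNets agree to the peering (note the remote vNet flag has varied across CLI versions):

```shell
# Peer the Citrix Cloud vNet with the vNet holding Active Directory
# (same subscription). Repeat with the names swapped for the reverse link.
az network vnet peering create \
  --name citrix-to-ad \
  --resource-group rg-citrix \
  --vnet-name citrix-cloud-vnet \
  --remote-vnet ad-vnet \
  --allow-vnet-access
```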

Friday, July 28, 2017

Virtual Network permissions for Citrix Cloud

In a previous post I covered how to manually create a Service Principal (App registration) for XenDesktop Essentials.  (this also applies to the XenApp and XenDesktop Service)

If you recall, this is the identity that Citrix Cloud will be using when it performs machine lifecycle actions in your Azure Subscription.

Things with permissions can get a bit strange in Azure pretty quickly.  One such area is Virtual Networks.

First of all, a Virtual Network exists within a Subscription.  It can belong to any Resource Group for management, but can be used by any machines or services within the subscription.

Now, in the world of assumptions, this is all fine and easy if you grant the Service Principal account the Contributor role AND the resource group that your virtual network belongs to is within that same subscription.  You can take advantage of the inheritance.

This is not always the case.  In fact, it might not be the case for you at all.  You might be putting very tight controls on that Virtual Network to ensure it never gets messed up.

The minimum permission that the Service Principal needs on your Virtual Network is the Virtual Machine Contributor role.  This level of access is necessary for the automated provisioning and lifecycle of desktop or session workers.

If you have a need to grant access to your Virtual Network or want to constrain access to your virtual network, here is how.

Remove the inheritance at the Virtual Network Resource Group from the subscription if it is enabled.
Explicitly grant the App Registration the VM Contributor role on the Virtual Network where worker machines will be attached when provisioned.
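That explicit grant can be sketched with the Azure CLI.  The client ID, subscription, resource group, and vNet names are placeholders:

```shell
# Grant the App Registration the Virtual Machine Contributor role,
# scoped to just the virtual network rather than the whole subscription.
az role assignment create \
  --assignee "<app-registration-client-id>" \
  --role "Virtual Machine Contributor" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<vnet-rg>/providers/Microsoft.Network/virtualNetworks/<vnet-name>"
```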

You can find more about the permissions in this article that I authored:  Manually granting Citrix Cloud Access to your Azure Subscription

Thursday, July 27, 2017

Azure Resource Manager Templates for Citrix Cloud workloads

At Citrix we recognize that different customers need different tools to accomplish their goals.  In the end, it is all about selecting the right tools for your environment and business processes to get you moving forward in an efficient way.

It has been brought to our attention that getting started in Azure with Citrix Cloud is not necessarily as straightforward as it needs to be, especially when customers go it alone (without the aid of a sales engineer or an integrator).

You will be seeing different tools, recommendations, updated documentation, and product enhancements to help get you (the customer) moving forward with your demonstration project, that Proof-of-Concept project, and moving into full production.

One of those tools was recently mentioned on this blog: Citrix Cloud XenDesktop Resource Location ARM Template

Without modification, this Azure Resource Manager template is focused on getting you up and running with that very first demo environment.
It provides everything from an Active Directory Domain to NetScaler VPX.  And the glue in between to make it all work.

Additionally, there are other Azure Resource Manager templates that are componentized to support you in building out the infrastructure in your own way or integrating with your current Azure environment for any of the Citrix Cloud offerings.

These are being built to bring success to your Proof-of-Concept and production deployments. You can find the PoC and Production template repository here: https://github.com/citrix/CitrixCloud-ARMTemplates
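Deploying one of the templates can be sketched with the Azure CLI; the template file name and parameters file here are illustrative, not actual paths in the repository:

```shell
# Create a resource group and deploy a template from the repo into it.
az group create --name rg-citrix-poc --location eastus
az group deployment create \
  --resource-group rg-citrix-poc \
  --template-uri "https://raw.githubusercontent.com/citrix/CitrixCloud-ARMTemplates/master/<template>.json" \
  --parameters @parameters.json
```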

This is a community repository and we would love to see your additions and suggestions.

I would also like to hear your stories and questions about using Azure to deploy your Citrix Cloud service, whether it be XenApp Essentials, XenDesktop Essentials, or XenApp and XenDesktop Service. 

Let's make it better together.


Friday, April 14, 2017

Getting started with Citrix Essentials on Azure


Earlier this month two Citrix Essentials products hit the Azure Marketplace;
XenApp Essentials and XenDesktop Essentials. https://www.citrix.com/blogs/2017/04/03/xendesktop-essentials-xenapp-essentials-now-available-in-azure-marketplace/

In this short period of time, there have been customers who have purchased the services or are kicking the tires.
While I won't give a number, I can say that it has been a pretty exciting first two weeks, and the interest from customers has been great.  Really great.

Both Essentials offerings run on Azure (exclusively) and are managed through Citrix Cloud.

Since these are new services, the documentation is constantly coming on-line. Here are some references that should get you over the initial hurdles of understanding how to implement it all.

The newly updated XenDesktop Essentials guide: http://docs.citrix.com/en-us/citrix-cloud/xenapp-and-xendesktop-service/xendesktop-essentials.html

Were you wondering if you could take advantage of Azure Active Directory Domain Services? Yes, you can: https://www.citrix.com/blogs/2017/04/11/xenapp-xendesktop-services-support-azure-ad-domain-services/

If you are in a hybrid cloud scenario (user workers in Azure with a VPN back to the datacenter where they need Kerberos or Windows pass-through authentication) you will need to set up an Active Directory replica server in Azure: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller

Are you setting up Windows Desktops in Azure? 
Be aware that you must implement Azure Active Directory. More details on the way.

And take advantage of the Hybrid Use Benefit Windows 10 image in the Azure Gallery if you create a new golden desktop image.

Friday, April 7, 2017

Active Directory with XenDesktop Essentials in Azure

XenDesktop Essentials and XenApp Essentials have hit the Azure Marketplace, and they are catching on.

For those of you that remember Azure RemoteApp, XenApp Essentials is the replacement for that.  And for those of you that want to give Windows Client desktops to your user-base, XenDesktop Essentials is for that.
And, for those that want it all, there is the XenApp and XenDesktop Service.

Now, the reason for my post.  Active Directory and Azure Active Directory.
All of these solutions require that the provisioned machines be joined to a domain.  This is where I see many folks getting confused between all of the various Active Directory options.

In reality, there are only two models that will work today (as of the date of this post).  Let me describe them in terms of what you need to accomplish.

In both models, you have the user side running in Azure.  Whether that be XenApp Servers (Terminal Servers for you really old folks) or Desktops (Windows Client or Windows Server desktops).

Your answer to this next question defines the path that you need to head down.

Do your Azure based user sessions need to access resources in some other cloud / datacenter?
A different way to ask this - do you need a VPN between your users in Azure and whatever other resources they need to access in some other cloud / datacenter?

If your answer was no:
Then I am calling you 'cloud born' or 'Azure based'.
Knowing this, you can use Azure AD plus Azure Active Directory Domain Services.

AD Sync is built in, and most likely Azure AD is your source for users.  But you need the additional service to support domain join, group policy, and those traditional things that Active Directory provides.

I personally love the following guide for getting AADDS all up and running: Azure Active Directory Domain Services for Beginners

The trick here is that you need to use FQDNs for domain joins and domain references.  If you customized your Azure AD domain, use that.  If you didn't, it is YourDomain.onmicrosoft.com.

When you need to add Group Policy to lock things down; https://docs.microsoft.com/en-us/azure/active-directory-domain-services/active-directory-ds-admin-guide-administer-group-policy

If your answer was yes:
Then you are more of a 'traditional' enterprise that is in some hybrid deployment model.
Knowing this you need to use Azure AD plus Active Directory.

You will need to enable AD Sync, you will need to establish a replica domain controller in Azure, and you (probably) already have a VPN between your datacenter and Azure virtual network.

The replica domain controller in Azure: https://docs.microsoft.com/en-us/azure/active-directory/active-directory-install-replica-active-directory-domain-controller
Active Directory Sync / Connect to Azure AD: https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect
(It does not matter where you install / run that, just that you do).

In both cases; Don't forget to update the DNS settings of your Virtual Network with these new machine IP addresses.


Wednesday, April 5, 2017

Manually creating a Service Principal for XenDesktop Essentials

I have been looking at the customer experience around XenDesktop Essentials lately, and I have helped a few customers with issues around defining their Service Principal accounts.

Backing up a bit.  What is this 'Service Principal' account and what is it used for?

The Service Principal is the username / secret that is used by Citrix Cloud to talk to the Azure API and perform machine lifecycle actions in your Azure Subscription.

You could call it a delegated user, or an application user, or simply an application account.
The Service Principal is not a new concept in the enterprise world.  In my background we always created very restricted user accounts for use by applications, granting only those permissions that were necessary for the application to perform its functions.

I know there is guidance on using various PowerShell scripts to do this.  But quite honestly, it is so few clicks in the Azure Portal, you might as well do it there.  Far less hassle than installing the Azure cmdlets.

Plus - by doing it this way, you can quickly identify if you have the permissions necessary, and get it fixed or pass the responsibility to the person that can do it.

First, log in to the Azure Account that 'Citrix' will be deploying workstations to.
Next make sure that you have a subscription container for the 'Citrix stuff' and a Virtual Network for the workstations to use all ready to go.


Create the App Registration / Service Principal
  1. Select the Azure Active Directory blade in the Azure Account
  2. Select 'App registrations'
  3. Select 'Add +'
  4. Enter a name, leave the application type as web app / API, and enter a Sign-on URL such as 'https://localhost/xde'
  5. Select Create
Grant it permission to interact with the Azure API for your account
  1. Once the registration is created, select it to view its settings
  2. Select 'Required permissions'
  3. Select 'Windows Azure Active Directory'
  4. Select 'Sign in and read user profile'
  5. Select 'Read all users' basic profiles'
  6. Select 'Save'
  7. Select Add, Select an API, Select 'Windows Azure Service Management API', Select 'Select'
  8. Select 'Access Azure Service Management as organization users'
  9. Select 'Select'
  10. Select 'Done'
Add a Key (the secret)
  1. In the Settings, Select 'Keys'
  2. Enter a Key description, select a duration
  3. Select 'Save'
  4. Copy the Value of the key  (this value is necessary when this Service Principal is used with Citrix Cloud - and there are warnings that you can never see this key again)
Grant the Service Principal access to the Subscription for 'Citrix stuff'
  1. Select the Billing Blade
  2. Select the Subscription that you would like Citrix Cloud to be using
  3. Select 'Access control'
  4. Select '+ Add'
  5. Under 'Role' select 'Contributor'
  6. Under Select, type in the name of the App Registration you created (mine was 'xendesktop')
  7. Select the Azure AD user
  8. Select 'Save'
At this point in time, the Service Principal information can be handed off to your Citrix Administrator for establishing the Host connection to Azure in the Citrix Cloud portal.  
When Adding the Connection select the 'Use existing' option.

They will need;
  • the Subscription UUID
  • the Active Directory ID
  • the Application ID
  • the Application secret (that value that I mentioned you had to copy and save)
If you return to the Azure Active Directory blade and select Properties, you will find the Directory ID.
Then select App registrations and select the one you created to find the Application ID.
The Subscription ID is back under the Billing blade.
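For the curious, here is what those four values are actually for.  This is a rough sketch in JavaScript with made-up placeholder IDs: any client (Citrix Cloud included) exchanges the Service Principal credentials for a bearer token at the Azure AD OAuth2 token endpoint, then calls the Azure Service Management API with that token.  Only the request body is built here; nothing is actually sent.

```javascript
// Sketch: how the Service Principal values are used by a client.
// All IDs below are made-up placeholders, not real values.
var directoryId = "11111111-2222-3333-4444-555555555555";   // Active Directory ID
var applicationId = "66666666-7777-8888-9999-000000000000"; // Application ID
var secret = "the-key-value-you-copied";                    // Application secret

// The Azure AD OAuth2 token endpoint for this directory.
var tokenUrl = "https://login.microsoftonline.com/" + directoryId + "/oauth2/token";

// A client_credentials grant request body; POSTing this to tokenUrl returns
// a bearer token, scoped to whatever the Contributor role grant on the
// subscription allows.
var body = [
  "grant_type=client_credentials",
  "client_id=" + encodeURIComponent(applicationId),
  "client_secret=" + encodeURIComponent(secret),
  "resource=" + encodeURIComponent("https://management.azure.com/")
].join("&");
```

The Subscription UUID does not appear in the token request at all; it only shows up later, in the URLs of the management API calls that the token authorizes.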


Tuesday, February 7, 2017

A reason to use state with Octoblu

I have been posting an 8 part series going through some advanced use of Octoblu.

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow
Part 4: Listening to and acting on device state change in Octoblu
Part 5: Breaking values into new keys with a function node
Part 6: Reformatting nested JSON with JavaScript
Part 7: Logical data nesting with your Octoblu state device
Part 8:

Back at the beginning I introduced the concept of a state device.

Now, if you aren't yet understanding why I might introduce a state device, consider this:


Have you ever found yourself using SetKey and GetKey within flows to persist data, even if only for a little while?

Have you ever run into complex timing issues where you would love to break something into multiple flows and instead end up with one huge complex one?


This is where the state device is an easy fit.  Persist your data, in a common object that you can reference between flows.

Then, instead of relying on some message chugging through the system, you act upon a change to your state device.  So you could dev null some message, perform no update and exit your logic stream.

In the example I have been laying out I have two primary scenarios: 

Scenario 1: there are multiple incoming data sources
I have multiple devices that are all similar and they are feeding in data that I need to evaluate in a common way.  Each flow can update my state device independently, and then I simply have one evaluation flow to determine if I am going to send out my alert.

Scenario 2: there are multiple data listener paths
Just the opposite.  I have one primary input data source, it is big and complex.
Then I have multiple flows, each of which evaluates a specific type of data or specific properties.

Either way, it allows me to compartmentalize my flow logic and reduce / remove redundancy across the system.

So I end up with something like this:
Combined with this:
To do what I was doing in the first screenshot.

The big upside for me is that I removed all of the hardcoded naming filtering that I started with in order to persist the data.
The flows are now able to be more dynamic and handle the same sets of data no matter if it was mine, or someone else's.






Monday, February 6, 2017

Referencing nested array values in JavaScript from my Octoblu state device

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow
Part 4: Listening to and acting on device state change in Octoblu
Part 5: Breaking values into new keys with a function node
Part 6: Reformatting nested JSON with JavaScript
Part 7: Logical data nesting with your Octoblu state device

Okay, here is the big post that I have spent an entire week working up to.

I have to admit, I don't write code every day and am self-taught in JavaScript (along with Python, PowerShell, and batch), so this took me a while to work through.

From my last post, my incoming message looks like this:

{
  "msg": {
    "rooms": {
      "Redmond": {
        "lunch": {
          "motion": {
            "name": "Redmond_lunch_motion",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "motion",
          },
          "refrigerator": {
            "name": "Redmond_lunch_refrigerator",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "refrigerator",
          },
          "door": {
            "name": "Redmond_lunch_door",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "door",
          }
        }
      }
    },
    "fromUuid": "d5b77d9b-aaf3-f089a7096ee0"
  },
  "node": "b5149300-9cbd-1f1b56e5d7bb"
}

There can be a variable number of devices per room, a variable number of rooms per map, and a variable number of maps.  My nesting pattern above is rooms.map.room.devices.

Now for the hard part.
I want to evaluate the differences between values of different devices, per room.
This ends up being a lesson in how values are referenced in arrays in JavaScript.
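As a warm-up, here is a minimal sketch (with a made-up stand-in for the real message) of that referencing: dot notation needs a literal key name, so when the map, room, and sensor names are only known at runtime you use bracket notation with variables, which is exactly what the code at the end of this post does.

```javascript
// Tiny stand-in for the real message, following the rooms.map.room.device nesting.
var msg = { rooms: { Redmond: { lunch: { door: { device: "door" } } } } };

var found = [];
for (var map in msg.rooms) {
  for (var room in msg.rooms[map]) {
    for (var sensor in msg.rooms[map][room]) {
      // msg.rooms.Redmond.lunch.door and msg.rooms[map][room][sensor]
      // reach the same object once map/room/sensor hold those strings.
      found.push(map + "/" + room + "/" + msg.rooms[map][room][sensor].device);
    }
  }
}
// found is ["Redmond/lunch/door"]
```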

Before I move forward: I have abbreviated the above JSON to spare you scrolling.  There are additional fields, and these additional fields contain date values that I am interested in.  These dates are formatted as odd-looking numbers, which are actually epoch time.
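A quick aside on those numbers: they are seconds since January 1, 1970, with fractional milliseconds.  Plain JavaScript can render one as a human-readable date once you convert seconds to milliseconds:

```javascript
// Epoch seconds (as in the message below) to a human-readable ISO string.
// The Date constructor expects milliseconds, hence the * 1000.
var epochSeconds = 1485539969.6240845; // an opened_updated_at from the sample
var human = new Date(epochSeconds * 1000).toISOString();
// human is "2017-01-27T17:59:29.624Z"
```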

So, to give you the full treatment, here is a real message:

{
  "msg": {
    "rooms": {
      "Redmond": {
        "lunch": {
          "motion": {
            "name": "Redmond_lunch_motion",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "motion",
            "motion": false,
            "motion_updated_at": 1485539337.480442,
            "battery": 1,
            "battery_updated_at": 1485539337.480442,
            "tamper_detected": null,
            "tamper_detected_updated_at": null,
            "temperature": 21.666666666666668,
            "temperature_updated_at": 1485539337.480442,
            "motion_true": "N/A",
            "motion_true_updated_at": 1485539125.483463,
            "tamper_detected_true": null,
            "tamper_detected_true_updated_at": null,
            "connection": true,
            "connection_updated_at": 1485539337.480442,
            "agent_session_id": null,
            "agent_session_id_updated_at": null,
            "connection_changed_at": 1485175984.3230183,
            "motion_changed_at": 1485539337.480442,
            "motion_true_changed_at": 1485539125.483463,
            "temperature_changed_at": 1485529054.5705206
          },
          "refrigerator": {
            "name": "Redmond_lunch_refrigerator",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "refrigerator",
            "opened": false,
            "opened_updated_at": 1485539969.6240845,
            "tamper_detected": false,
            "tamper_detected_updated_at": 1476739884.682764,
            "battery": 1,
            "battery_updated_at": 1485539969.6240845,
            "tamper_detected_true": "N/A",
            "tamper_detected_true_updated_at": 1476739866.2962902,
            "connection": true,
            "connection_updated_at": 1485539969.6240845,
            "agent_session_id": null,
            "agent_session_id_updated_at": null,
            "opened_changed_at": 1485539969.6240845
          },
          "door": {
            "name": "Redmond_lunch_door",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "door",
            "opened": false,
            "opened_updated_at": 1485538007.9089093,
            "tamper_detected": null,
            "tamper_detected_updated_at": null,
            "battery": 1,
            "battery_updated_at": 1485538007.9089093,
            "tamper_detected_true": null,
            "tamper_detected_true_updated_at": null,
            "connection": true,
            "connection_updated_at": 1485538007.9089093,
            "agent_session_id": null,
            "agent_session_id_updated_at": null,
            "opened_changed_at": 1485538007.9089093
          }
        }
      }
    },
    "fromUuid": "d5b77d9b-aaf3-f089a7096ee0"
  },
  "node": "b5149300-9cbd-1f1b56e5d7bb"
}

Now, the output I am looking for is to take some of these sensor Date values and evaluate them between each of the three devices.  Such as: door-refrigerator, motion-door, motion-refrigerator and so on.

If these values were in the same part of the message, it would be really easy.  I could simply dot reference the values and do the math.
But they are not.  Each sensor is in its own document, in an array.

Now, if you recall a few posts back, I have a naming convention and I am standardizing three of the names:  "door", "refrigerator", and "motion".  Those I am not allowing to change.  But the room and the map can.

Recall, I began this exercise with just an array of devices with values.  I processed them to group by a logical naming pattern, saved that to an Octoblu state device, and now I am further processing that into actionable data which I can easily handle with Octoblu filters for alerting or whatever else I want to do.

So, to get you to read to the end and not just steal my code, here is the output that I am producing, per room.
This gives me a nice single document per room as output - I can pass that to a demultiplex node to break the root array apart and evaluate each document.

My output looks like this:

{
  "msg": [
    {
      "motion": "motion",
      "motionAt": 1485544607.3195794,
      "motionAtHuman": "2017-01-27T19:16:47.319Z",
      "mapTitle": "Redmond",
      "room": "lunch",
      "refrigerator": "refrigerator",
      "fridgeOpenedAt": 1485539969.6240845,
      "fridgeOpenedAtHuman": "2017-01-27T17:59:29.624Z",
      "door": "door",
      "doorOpenedAt": 1485538007.9089093,
      "doorOpenedAtHuman": "2017-01-27T17:26:47.908Z",
      "diffDoorsOpenedMinutes": 32,
      "diffDoorMotionMinutes": 109,
      "diffRefrigeratorMotionMinutes": 77,
      "sinceDoorOpenMinutes": 115,
      "sinceRefrigeratorOpenMinutes": 82,
      "sinceMotionMinutes": 5
    }
  ],
  "node": "98cb8680-a264-1b8483214e06"
}

Now, to end this long, long story the JavaScript is below.
What I tried to do was have an intuitive way to read the code and reference each level of the document arrays, so you could understand where you were in the hierarchy.

// array to output
var output = [];
for ( var map in msg.rooms ){
    for ( var room in msg.rooms[map] ){
        var doorOpenedAt;
        var fridgeOpenedAt;
        var motionAt;
        var roomOutput = {};
        for ( var sensor in msg.rooms[map][room] ){
            switch ( msg.rooms[map][room][sensor].device ) {
                case "door":
                    doorOpenedAt = moment.unix(msg.rooms[map][room][sensor].opened_changed_at);
                    roomOutput.door = msg.rooms[map][room][sensor].device;
                    roomOutput.doorOpenedAt = msg.rooms[map][room][sensor].opened_changed_at;
                    roomOutput.doorOpenedAtHuman = doorOpenedAt.toISOString();
                    break;
                case "refrigerator":
                    fridgeOpenedAt = moment.unix(msg.rooms[map][room][sensor].opened_changed_at);
                    roomOutput.refrigerator = msg.rooms[map][room][sensor].device;
                    roomOutput.fridgeOpenedAt = msg.rooms[map][room][sensor].opened_changed_at;
                    roomOutput.fridgeOpenedAtHuman = fridgeOpenedAt.toISOString();
                    break;
                case "motion":
                    motionAt = moment.unix(msg.rooms[map][room][sensor].motion_true_changed_at);
                    roomOutput.motion = msg.rooms[map][room][sensor].device;
                    roomOutput.motionAt = msg.rooms[map][room][sensor].motion_true_changed_at;
                    roomOutput.motionAtHuman = motionAt.toISOString();
                    break;
            } // close of switch
            roomOutput.mapTitle = msg.rooms[map][room][sensor].mapTitle;
            roomOutput.room = msg.rooms[map][room][sensor].room;
        }  // close of sensor
        // Removing Math.abs() gives a signed difference: if the refrigerator
        // opens and the door does not, the value goes negative.
        roomOutput.diffDoorsOpenedMinutes = Math.abs(doorOpenedAt.diff(fridgeOpenedAt, 'minutes'));
        roomOutput.diffDoorMotionMinutes = Math.abs(doorOpenedAt.diff(motionAt, 'minutes'));
        roomOutput.diffRefrigeratorMotionMinutes = Math.abs(fridgeOpenedAt.diff(motionAt, 'minutes'));
        roomOutput.sinceDoorOpenMinutes = moment().diff(doorOpenedAt, 'minutes');
        roomOutput.sinceRefrigeratorOpenMinutes = moment().diff(fridgeOpenedAt, 'minutes');
        roomOutput.sinceMotionMinutes = moment().diff(motionAt, 'minutes');
        output.push(roomOutput);
    }  // close of room
} // close of map
return output;


Lots of leading up to this post.  But I like to expand folks' understanding along the way.
And I know we don't all tolerate long articles.

I can thank Tobias Kreidl for even getting me started on this series of posts.
He asked a simple question, and I had a final answer, but I wanted to tell the journey so that he understood how I got to where I did.
That leaves it up to you to take what you need.  That's just how I write and respond to questions.

Friday, February 3, 2017

Logical data nesting with your Octoblu state device

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow
Part 4: Listening to and acting on device state change in Octoblu
Part 5: Breaking values into new keys with a function node
Part 6: Reformatting nested JSON with JavaScript

In my last post I left you with some JavaScript to reformat a JSON message and come out with a nice new format.

I left you hanging with my key format though.
Why did I format my key names the way I did?

It is actually pretty simple in concept (but it took me a long time to get all the code right).

Previously I mentioned that after I $set the data on my Octoblu state device, I want to catch that data change in another workflow.
And I also mentioned that logically grouping that data would make it easier to visualize and work with farther down the chain.

So, back to the key name pattern in my output:

{
  "msg": {
   "rooms.Redmond.lunch.motion": {
     "name": "Redmond_lunch_motion",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "motion",
   },
   "rooms.Redmond.lunch.refrigerator": {
     "name": "Redmond_lunch_refrigerator",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "refrigerator",
   },
   "rooms.Redmond.lunch.door": {
     "name": "Redmond_lunch_door",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "door",
    }
  },
  "node": "e271a6c0-9f9b-8d7882b7836a"
}


This is all about the $set.
I want each set of devices grouped under their room which is under the map they correspond to.
Thus: rooms.map.roomName.deviceName

Which is passed to a JSON Template Node in Octoblu which very simply looks like this:

{
 "$set": {{msg}}
}


Here is where there are different patterns for referencing the message values in Octoblu.
If you are referencing a blob don't put quotes around the mustache notation like I did above.
If you are referencing a value, then put double quotes around the value like this:  "rooms.{{msg.name}}"

The hard thing to get right is the quotes, since the editor will falsely warn that your JSON is improperly formatted even when the message that comes out is actually correct.
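To see why, here is a toy illustration in plain JavaScript (this is not the Octoblu template engine; string concatenation stands in for the mustache expansion):

```javascript
// When the mustache reference expands to a whole object ("blob"), the template
// must not wrap it in quotes; when it expands to a string value, it must.
var msg = { rooms: { foo: 1 }, name: "Redmond" };

// Blob: the serialized object drops into the template unquoted.
var blobTemplate = '{ "$set": ' + JSON.stringify(msg.rooms) + ' }';
// Value: the string has to land inside double quotes to stay valid JSON.
var valueTemplate = '{ "key": "rooms.' + msg.name + '" }';

var blobResult = JSON.parse(blobTemplate);   // { "$set": { "foo": 1 } }
var valueResult = JSON.parse(valueTemplate); // { "key": "rooms.Redmond" }
```

Quote the blob, or skip the quotes on the value, and the JSON.parse step is where things would actually blow up.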

Now, back to why I had the dot notation key name.

When I listen to my state device for a change I will get this nice hierarchy as the output.  And I persist my data nice and logically.

{
  "msg": {
    "rooms": {
      "Redmond": {
        "lunch": {
          "motion": {
            "name": "Redmond_lunch_motion",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "motion",
          },
          "refrigerator": {
            "name": "Redmond_lunch_refrigerator",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "refrigerator",
          },
          "door": {
            "name": "Redmond_lunch_door",
            "mapTitle": "Redmond",
            "room": "lunch",
            "device": "door",
          }
        }
      }
    },
    "fromUuid": "d5b77d9b-aaf3-f089a7096ee0"
  },
  "node": "b5149300-9cbd-1f1b56e5d7bb"
}


Next post:  I am going to parse all that nesting back apart and make yet another message


Thursday, February 2, 2017

Reformatting nested JSON with JavaScript

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow
Part 4: Listening to and acting on device state change in Octoblu
Part 5: Breaking values into new keys with a function node

In my last example for an Octoblu Function node, I very simply took the string value from a key and, using .split(), broke it into new fields.

This had the dependency of a format convention for the string.

Now, what if I had an incoming array which contained that data, and I only wanted to select certain values within each array element?
And what if I had additional data nested within another array that I want to bring up a level, to make it easier to evaluate later on?

To describe this differently, lets look at (an abbreviated version of) my incoming message:

{
  "msg": {
    "data": [
      {
        "uuid": "3b8a7529-b0f1-ddba9dc4cc27",
        "desired_state": {
          "pairing_mode": null
        },
        "last_reading": {
          "connection": true,
          "connection_updated_at": 1480529975.6920671,
        },
        "hub_id": "509234",
        "name": "Redmond",
        "locale": "en_us",
        "units": {},
        "created_at": 1476738922,
        "triggers": []
      },
      {
        "last_event": {
          "vibration_occurred_at": null
        },
        "uuid": "4d60c8ad-b6d2-f17c5e4a1192",
        "desired_state": {},
        "last_reading": {
          "motion_changed_at": 1480490895.7546704,
          "motion_true_changed_at": 1480490698.2846074,
          "temperature_changed_at": 1480530247.413451,
          "connection_changed_at": 1480530247.413451
        },
        "name": "Redmond_lunch_door",
        "triggers": []
      },
  },
  "node": "5c0e3d40-bddd-6f97ce016844"
}


I have an incoming message (msg); it has an array (data) of documents.  The data within each document can differ, as each is a different device with different capabilities and settings.

From this point I have a couple of wants: I need the name information of the sensors (from my previous post), and I need to un-nest the values of last_reading to make them easier to handle down the line.

And, then I want to save this information to my Octoblu device (a few blog posts ago).

Let's just focus on the array at this point; I don't want this to get too confusing.

//A document object to hold the sensors per room
var rooms = {};

for ( var i in msg.data ){
 var sensor = {}; //an empty document object to populate with new key:values

 sensor.name = msg.data[i].name; //the incoming name

 //the name in dot notation instead of underscores (see the next post)
 var dotName = (msg.data[i].name).replace(/_/g,".");

 // break the device name into its descriptors (from the last post)
 var descriptors = (msg.data[i].name).split('_');

 switch(descriptors.length){
  case 3:
   sensor.mapTitle = descriptors[0];
   sensor.room = descriptors[1];
   sensor.device = descriptors[2];
   break;
  case 2:
   sensor.mapTitle = descriptors[0];
   sensor.device = descriptors[1];
   break;
  case 1:
   sensor.device = descriptors[0];
   break;
 }

 // un-nest last_reading to make it easier to handle later on
 var last_reading = msg.data[i].last_reading;
 for ( var reading in last_reading ){
  sensor[reading] = last_reading[reading];
 }

 // only those devices with a room value
 // in the end, I want the devices of a room under the key pattern for that room
 if ( sensor.room ) {
  dotName = "rooms." + dotName;
  rooms[dotName] = sensor;
 }
}
return rooms;


This is what I get back out:

{
  "msg": {
   "rooms.Redmond.lunch.motion": {
     "name": "Redmond_lunch_motion",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "motion",
     "motion": false,
     "motion_updated_at": 1485539337.480442,
     "connection_changed_at": 1485175984.3230183,
     "motion_changed_at": 1485539337.480442,
     "motion_true_changed_at": 1485539125.483463,
     "temperature_changed_at": 1485529054.5705206
   },
   "rooms.Redmond.lunch.refrigerator": {
     "name": "Redmond_lunch_refrigerator",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "refrigerator",
     "opened": false,
     "opened_updated_at": 1485539969.6240845,
     "connection_updated_at": 1485539969.6240845,
     "opened_changed_at": 1485539969.6240845
   },
   "rooms.Redmond.lunch.door": {
     "name": "Redmond_lunch_door",
     "mapTitle": "Redmond",
     "room": "lunch",
     "device": "door",
     "opened": false,
     "opened_updated_at": 1485538007.9089093,
     "connection_updated_at": 1485538007.9089093,
     "opened_changed_at": 1485538007.9089093
    }
  },
  "node": "e271a6c0-9f9b-8d7882b7836a"
}


Next post:  How that dot notation key name pattern is useful to me.

Wednesday, February 1, 2017

Breaking a value into new keys with a function node

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow
Part 4: Listening to and acting on device state change in Octoblu

Previously I had set custom keys and then listened for settings changes.

What if I wanted to set an array of key:values, or a larger document?  How might I handle that with Octoblu?

You have two primary options:
  • collect nodes
  • F(x) (function nodes)
Collect nodes emit once the collection reaches its minimum size, and reset once it reaches its maximum size.  What you get is a collection of the messages that came in to that node.

The key here is that you need to predict or hard-code the size at which the collection resets back to zero and begins again.  That is not always easy.

So I frequently end up leaning on F(x) or Function nodes - these run arbitrary, linear JavaScript.  It can't loop indefinitely, it can't wait.  You must construct it so that it executes as a very quick function: taking the incoming message, running, and then returning whatever you tell it to.

By default a Function node gives you the line return msg;
Which would output exactly the message (msg) that came in.

If you want to reference the value of a specific key you simply use dot notation, such as return msg.rooms;

Everything you do here requires that you understand the data in your incoming message - and, for longevity's sake, that its format does not change.

Many of the methods that you have in JavaScript are at your disposal.  The not-so-easy part is debugging, because you don't get very good debug detail.  But with a bit of effort most folks can work through that.
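One debugging pattern that helps me (my own convention, not anything Octoblu-specific) is to wrap the risky part in try/catch and return the error text in the outgoing message, so the failure shows up in a debug node instead of vanishing:

```javascript
// Function-node style sketch: surface errors in the outgoing message.
// 'msg' stands in for the incoming message; here it is missing 'name'.
var msg = {};
var out;
try {
  // This line throws because msg.name is undefined.
  out = { device: msg.name.split('_')[0] };
} catch (err) {
  // Pass the error text along so a downstream debug node can display it.
  out = { error: String(err) };
}
// In a real function node you would finish with: return out;
```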

Now, for my first example:  I am going to create new key:values from an existing value.

I have established a naming pattern of:  map_room_device

This allows me to name my devices in a structured way and then deal with them in Octoblu without using a large number of filters and hard coded values.  In essence, devices can come and go as long as I stick with my naming convention.

Now, I need to make this naming convention more useful and easier to work with farther down the line as the messages become properties, so I want to make new keys from the name.

var descriptors = (msg.name).split('_');


switch(descriptors.length){
 case 3:
  msg.mapTitle = descriptors[0];
  msg.room = descriptors[1];
  msg.device = descriptors[2];
  break;
 case 2:
  msg.mapTitle = descriptors[0];
  msg.device = descriptors[1];
  break;
 case 1:
  msg.device = descriptors[0];
  break;
}


return msg;


I always have error handling, and that is why I have three cases: in case I have a device that does not have a map or a room in its naming pattern.

The output of this is the addition of 1, 2, or 3 key:values to the outgoing message.
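To make those three cases concrete, here is the same split/switch wrapped in a helper function so it can be run against sample names (the wrapper is mine; the logic is the node's):

```javascript
// Hypothetical helper wrapping the function-node logic above.
function describe(name) {
  var out = { name: name };
  var descriptors = name.split('_');
  switch (descriptors.length) {
    case 3:
      out.mapTitle = descriptors[0];
      out.room = descriptors[1];
      out.device = descriptors[2];
      break;
    case 2:
      out.mapTitle = descriptors[0];
      out.device = descriptors[1];
      break;
    case 1:
      out.device = descriptors[0];
      break;
  }
  return out;
}

var full = describe("Redmond_lunch_door"); // mapTitle, room, and device all set
var noRoom = describe("Redmond_door");     // mapTitle and device set
var bare = describe("door");               // only device set
```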

Just a simple case of what can be done.

Next, straight to a really big message and reformatting arrays..

Tuesday, January 31, 2017

Listening to and acting on device state change in Octoblu

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu
Part 3: Setting the state of an Octoblu device from a flow

Now we are on to Part 4 - Responding to state changes

If you are catching up, check out the previous posts.

In this post I am going to listen to state changes of an Octoblu device within a flow and then respond to that somehow.

I have my starter flow from last time that looks like this:


Now, I want to create a second flow, where I am going to simply wait for my key 'rooms' and then process that data.  In this flow I am listening to state changes to myDevice.

To begin with your new flow should look like this:


Turn on 'Use Configuration Events', but you don't need to turn on 'Use Incoming Messages' like you did in the first flow.  In this flow we are only listening, not sending messages to or modifying the properties of the device (like the previous flow did).

Open a second browser window, open your previous flow and click the trigger.
Notice that in the debug of this flow, you get the same debug output.  Because you are listening for changes to the device (in both flows).

Now, add some operator after your device in the new flow, set it and modify your JSON in the first flow to send something you can begin to act on.  Use {{msg.rooms}} to reference my example value, but make your own, set multiples, have fun with it.

Quite honestly, it is that simple.
And what you have built is this special Thing, where you can now save JSON formatted data, and then catch when it changes in some other flow.

one to one, many to one, one to many.....
And this data is all yours, formatted by you.

Next up, some of the screwy ways I have dealt with JSON data.


Monday, January 30, 2017

Setting the state of an Octoblu device from a flow

Building on what I began:

Part 1: Use configuration events in Octoblu
Part 2: Creating custom devices in Octoblu

In this post I am going to set and unset properties and their values on an Octoblu Thing, from a flow.

I am going to use the custom Thing that I created in my previous post, but you can use any Thing that has the option "Use Configuration Events".

Part 3 - Setting properties dynamically from a flow

In the previous post I created a custom Thing. And named it: MyDevice

Now I create a new flow, and I add that Thing to the Flow.


I select the Node (a reference to that thing in this flow) and turn on Use Configuration Events.  Notice the little gear.  This means that the behavior of this node in this flow is now different.

The next advanced thing that I am going to do is add a Trigger node and a JSON Template node to define the message that is being sent, and I am going to turn on Use Incoming Message on my custom node.

Let me back up here and explain a little.
Use Incoming Message takes whatever message is sent to the node and applies it.  If you don't turn this on, your device must define fields that you can set, and you reference the values of keys using mustache notation or hard-code values.
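As a sketch of the mustache approach (the incoming field name payload.room is made up for illustration): if the message arriving at the template node carries a room value, the template could pull it in like this:

```json
{
 "$set": {
  "rooms": "{{msg.payload.room}}"
 }
}
```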

And then I am going to attach a debug node after my Thing.


And this makes a complete message circuit from start to end.

Be careful to pay attention to what you are doing - DO NOT create loops; they are very, very bad.  You will get your account suspended.

Now, to craft some JSON.

This is a very simple JSON body that we put in the template:
{
 "$set": {
  "rooms": "foo"
 }
}

This sets the value of the key 'rooms' to 'foo'.

Start the flow, click the Trigger, and look for the Key 'rooms' in the message output.
In fact, explore that message output a bit.  Notice, these are all the settings / properties / state values of your device.

Now, change the JSON.  Change the key name, or change the value and see what happens.
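For instance, a single message can set several keys at once (again, key names are just my examples):

```json
{
 "$set": {
  "rooms": "kitchen",
  "occupied": true
 }
}
```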

If you want to remove the key and its value, use $unset.
{
 "$unset": {
  "rooms": ""
 }
}

The $set and $unset are actually MongoDB update operators, and you can use others, such as $addToSet, $push, etc., as long as you format your JSON properly.
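These operators aren't unique to Octoblu.  As a rough sketch (plain Python, not Octoblu code, and my assumption of the exact server-side behavior), here is roughly how they transform the stored state document:

```python
import copy

# Rough sketch: how MongoDB-style update operators
# transform a device's stored state document.
def apply_update(state, update):
    state = copy.deepcopy(state)
    for key, value in update.get("$set", {}).items():
        state[key] = value                       # create or overwrite
    for key in update.get("$unset", {}):
        state.pop(key, None)                     # remove the key entirely
    for key, value in update.get("$addToSet", {}).items():
        arr = state.setdefault(key, [])
        if value not in arr:                     # append only if absent
            arr.append(value)
    for key, value in update.get("$push", {}).items():
        state.setdefault(key, []).append(value)  # always append
    return state

state = apply_update({}, {"$set": {"rooms": "foo"}})
print(state)   # {'rooms': 'foo'}
state = apply_update(state, {"$unset": {"rooms": ""}})
print(state)   # {}
```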

Now, I have discovered that some key names must be reserved by Octoblu; if you try to use them, nothing happens.  I mention this in case you find yourself doing everything right, yet nothing changes.
Is there a list of these reserved keys?  Not that I know of.  I was simply observing behavior...

Next up:  responding to state changes