CAS configuration for G Suite

CAS once upon a time contained a specific integration for G Suite, but that is now gone. Setting CAS up to use G Suite is not difficult, but finding the right values isn't easy, even for a SAML 2 veteran. First, if you are migrating to the "new" CAS configuration, or adding SAML 2 to your G Suite instance, you probably want to request a test domain for G Suite. For higher education this was pretty simple: just search for Google's latest documentation and follow the prompts. Then you can test against a couple of user accounts you create in the test instance without fear of screwing up all of your existing users.

First, in the G Suite configuration you will want to access "Set up single sign-on (SSO) with a third party IdP", and in there check "Use a domain specific issuer". This changes the issuer to a domain specific value, which will make it a lot easier for you to differentiate between your test and prod instances of G Suite.

For CAS, the Sign-in page URL needs to be https://<host>/cas/idp/profile/SAML2/Redirect/SSO, assuming your CAS context is cas. The certificate needs to be your SAML 2 signing certificate.

Now you will need to generate the metadata to provide to CAS, as G Suite does not provide it. The SAML Developer Tools site is quite helpful in doing this.

  • Entity ID is the issuer, which will include your domain if you chose to use a domain specific issuer.
  • ACS Endpoint: this was determined by watching output from SAML Tracer. These values will depend on what your subdomain / domain are for your G Suite instances.
  • Nameid Format: Leave at 1.1 unspecified
  • No need to provide a cert, as this would be Google's cert, which they don't provide

When you get the generated metadata, you will need to remove the “validUntil” attribute, as it is set to expire very quickly.
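If you would rather script that than hand-edit the file, here is a minimal sketch, assuming the generated metadata is a single EntityDescriptor (the sample values below are placeholders):

```python
import xml.etree.ElementTree as ET

def strip_valid_until(metadata_xml: str) -> str:
    """Drop the validUntil attribute from the EntityDescriptor root."""
    root = ET.fromstring(metadata_xml)
    root.attrib.pop("validUntil", None)  # no-op if already absent
    return ET.tostring(root, encoding="unicode")

# placeholder metadata for illustration
sample = ('<EntityDescriptor xmlns="urn:oasis:names:tc:SAML:2.0:metadata" '
          'validUntil="2019-01-01T00:00:00Z" entityID="example"/>')
cleaned = strip_valid_until(sample)
```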

From here, you can configure it like you would any other SAML 2 service in CAS. Perhaps the one slight difference is that you will need to host the generated metadata at your own URL, or put it somewhere on disk and reference it that way. Once validated against the test G Suite instance, repeat with production. The only change is the subdomain / domain that is listed everywhere; change that to match your production instance.
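As a sketch, a CAS service registration for this might look roughly like the following JSON service file (the serviceId, id, and metadata path here are placeholders, and the entity ID depends on your issuer setting):

```json
{
  "@class" : "org.apereo.cas.support.saml.services.SamlRegisteredService",
  "serviceId" : "google.com/a/example.edu",
  "name" : "GSuite",
  "id" : 1000,
  "description" : "G Suite test instance",
  "metadataLocation" : "file:/etc/cas/saml/gsuite-metadata.xml"
}
```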

Junk it: Office 365, you can be anyone you want

Note: The following is an intuitively obvious result of default configurations noted in the public documentation.

TLDR: Office 365 turns DMARC reject actions into quarantine actions, and then throws those messages into the Junk folder.

How email works

Most of you have probably used email, most likely via one of the large email providers like Gmail, Hotmail, Office 365, Yahoo! Mail, or AOL. These environments lock down who you can claim to be in your from address. But that is a (sensible) limitation that these providers have built into their systems. Run your own email system (which isn't hard to do if your only plan is to abuse it), and you quickly see that you can claim to be anyone in the world. Email from addresses (the one you see when you get an email) are as secure as the return address on a piece of mail you get: you can put whatever you want there. Caller ID, incidentally, is about as secure at the moment.

This is less than ideal. So specifications have been added on top of the original email specifications that let domain owners assert how their email should be handled.

SPF: Sender Policy Framework

There is a little lie two paragraphs up. The from address you see on email isn't like the return address on a piece of mail; it is more like the signature on the paper inside the envelope. There is a separate header for where to send the message back in case it doesn't work, kind of like return to sender in physical mail. This is the envelope from, or return path. SPF allows a sending domain to list the IPs (basically, servers on the Internet) that are allowed to use a return path back to their domain. In addition, the domain can assert how strictly a receiving system should treat a message from an unapproved IP.

Domain owners can say that an unapproved IP should be treated as a SOFTFAIL (soft failure). This is for when they aren't entirely sure which IPs are sending on their behalf, and is useful for transitioning to the next level, FAIL. A message that triggers FAIL should be rejected according to spec. This means the person being sent the message should never have any way of seeing it. It should disappear into the ether.
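For reference, those assertions live in a DNS TXT record. A sketch with placeholder values (a domain publishes only one such record; both qualifiers are shown for comparison):

```
; transition mode: messages from unlisted IPs SOFTFAIL (~all)
example.edu.  IN TXT "v=spf1 ip4:192.0.2.10 include:_spf.example.net ~all"

; strict mode: messages from unlisted IPs FAIL and should be rejected (-all)
example.edu.  IN TXT "v=spf1 ip4:192.0.2.10 include:_spf.example.net -all"
```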

DMARC: Domain-based Message Authentication, Reporting, and Conformance

Most people don't go through reading return path headers. And there are very legitimate reasons why the return path might be different from the visible from address. A lot of organizations like to use mass mailing products like Constant Contact or Mailchimp. These set the return path back to the mass mailing company so that they can track undeliverable addresses automatically, while the visible from address is that of the customer paying them to send the email.

The goal of DMARC is to protect that valuable visible from address. Why does this matter? If an email address claims to be from your bank, you want to be assured that it is from your bank and not an attacker.

DMARC has three different levels for how to handle failures of the DMARC checks: none, quarantine, and reject. None is typically set by sending organizations to get some level of reporting. If a message passes DMARC, that's a good sign to the receiving email system; if it doesn't, that is less sure. DMARC offers a mechanism for nice organizations to send a report of what DMARC messages they saw, where they came from, and what the DMARC, SPF, and DKIM status was. This is helpful for an organization switching to stronger levels of DMARC. And you want organizations to switch to stronger levels of DMARC.

The quarantine level should cause the mail to be delivered to quarantine. The reject level once again means that the message should disappear into the ether.
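Like SPF, the policy is published as a DNS TXT record; a sketch with placeholder values (p sets the level, rua is where aggregate reports go):

```
_dmarc.example.edu.  IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.edu"
```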

There are two different ways a message can pass DMARC. One is if it passes SPF with an aligned domain: if the visible from address and the return path use the same domain, they are in alignment, so SPF passing makes DMARC pass. If the return path uses a different domain, that isn't aligned, so even if SPF passes for the return path domain, DMARC wouldn't pass. This is also true of messages from Mailchimp and similar.

There is another option called DKIM (DomainKeys Identified Mail) Signatures. With this, parts of messages, including the subject and from address, are cryptographically signed. The receiving system is told what domain signed the message, and can verify the signed values. If the signature is correct, and the signing domain is the same as the from address, then this too passes DMARC.
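The two paths can be sketched as a toy check (strict identifier alignment only; real DMARC also supports relaxed, organizational-domain alignment):

```python
def dmarc_passes(from_domain: str,
                 spf_pass: bool, return_path_domain: str,
                 dkim_pass: bool, dkim_domain: str) -> bool:
    """Toy DMARC check: pass if either SPF or DKIM passes with an
    aligned domain (strict alignment only)."""
    spf_aligned = spf_pass and return_path_domain == from_domain
    dkim_aligned = dkim_pass and dkim_domain == from_domain
    return spf_aligned or dkim_aligned

# A Mailchimp-style message: SPF passes for the mailer's own return-path
# domain, but that domain isn't aligned with the visible from address,
# so only a DKIM signature from the customer's domain can pass DMARC.
```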

The problem

There are three problems with how Office 365 handles email using the above specs. First, while Microsoft requests that DMARC reports be sent to them so they can better protect their domain, Office 365 will not provide DMARC reports to others. It's hard to transition your domain to a stricter level of DMARC when the primary destination for your email doesn't provide reports. Yahoo!, Gmail, and many others helpfully provide DMARC reports.

The slightly more severe problem is that there is an extra setting in Office 365 to have SPF FAILs treated merely as a sign of strong spam. SPF FAILs should cause the message to be immediately dropped; that is what the owner of the domain has specifically asked to happen. This can help allow impersonation. However, avoiding SPF failures when launching an email attack shouldn't be hard anyway, as no one visually inspects the return path.

The bad problem is in the TLDR above. Office 365 treats DMARC rejects and quarantines as the same thing, and by default the message only goes to Junk. An email admin would need to know to look for these settings and turn up the protection. Even then, by default the best you can do is get the rejects into quarantine. I am going to guess that most email admins these days at best don't understand DMARC, DKIM, and SPF as well as they should, and more likely aren't even aware of what they mean.

Microsoft's docs say that they are loosening up the handling of DMARC failures for your benefit, because things like mailing lists usually break DMARC. While it is true that this can happen, one can also correctly configure mailing list software to handle it. I would prefer that if my bank says an email wasn't sent by them, I never have to see it.

So, if you are a pentester and your customer is on O365, there is a good chance you can get a message from any domain you want into the target’s junk folder. Microsoft even helpfully provides a way to get into the inbox:

If desired, users can still get these messages in their inbox through these methods: Users add safe senders individually by using their email client.

Improving Video Conferencing Performance

As we head into a fall of remote learning and meetings, some may experience issues with video conferencing not working well from their home. There are several easy things you can do that can help out a lot.

The FCC defines broadband Internet as 25 Mbps (megabits per second) down and 3 Mbps up. Hopefully you have better performance than that from your provider. Just randomly checking on Zoom calls I have been on, it says it is using about 1 Mbps each way while in gallery view and I am sharing my camera. As you can quickly see, it is the upload speed that becomes the limiting factor. Looking at Zoom’s system requirements, I suspect other solutions are going to be in a similar range for requirements. You can see that requirements for audio are much less. So the first way to improve performance is to not share your camera.
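The arithmetic is worth making explicit (3 Mbps is the FCC minimum upload above; 1 Mbps per call is my observed Zoom usage with the camera on):

```python
broadband_up_mbps = 3.0   # FCC minimum upload for "broadband"
per_call_up_mbps = 1.0    # observed Zoom usage, camera on, gallery view

# how many simultaneous camera-on calls fit in the minimum upload
max_calls = int(broadband_up_mbps // per_call_up_mbps)
print(max_calls)  # 3, with no headroom left for anything else
```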

You can use multiple different sites to see what your actual bandwidth is; Ookla is a very popular one. What you get and what your provider may be telling you can be two different things. My speed test says seven Mbps up, so I can do three or four video conferences at a time with the camera going and still have some extra room. If you have a lot of users and not much upload, you may need to upgrade your service. But if you have enough upload room and are still experiencing problems, I'm going to guess it is with wireless devices. There are several options for how to improve this. Spending money on a better router / AP (access point) or a faster connection from your ISP (Internet service provider) likely isn't going to solve your problem.


  1. Have no more than one wall between your device and your AP.
  2. Get your AP out in the open (see #1).
  3. Plug devices in with a network cable.
  4. Buy a mesh network (see #1).
  5. Get on 5 GHz.

239,000 mile view of wireless

WiFi operates on radio waves in two different bands, 2.4 GHz and 5 GHz. There are several channels in each of those bands to separate devices from each other. The fewer devices on a channel, the easier it is to communicate.

In this way, it mirrors the sound waves and conversations you are used to in everyday life. It is easier to talk to someone in a quiet room than at a KISS concert. However, unlike your home, where you can't hear the conversations of your neighbors, your devices likely can hear your neighbors' devices. So your devices have to coordinate for airtime with any of your neighbor's devices they can see. When you look at the list of wireless networks on your phone and see a list that isn't yours? Well, those may be networks your devices have to coordinate airtime with. A single bad device can use up 70% of the available airtime on a channel without using much bandwidth out to the Internet. It's like a KISS concert in your neighbor's back yard: really loud, and ultimately not sending much information.

So the goal is to provide the best experience for your devices by reducing the amount of airtime they are using to transmit the data they need. The items listed above help do that.

Plug it in

Plugging your devices in with an Ethernet / network cable completely removes them from wireless: they can't experience wireless interference, and they can't use up airtime. Plugging in something like a Roku or TV that you are streaming to removes it from the devices that can cause interference. So anything you can plug in, go for it. Network cables are cheap. Getting bad performance playing a multiplayer video game on an Xbox, PS4, or Switch? Plug it in, it will drop the ping. Have a laptop that doesn't have a network port? You can buy USB network adapters for around $18 that work with laptops, including Chromebooks. Have an iPad? You can buy an adapter for that as well that includes a power port so you can charge while connected. Same with Android devices. You don't need to be plugged into the network at all times, but it can help.

There are also powerline modems that let you run a wired network through your outlets with adapters. This may be a good solution to avoid cables running down your halls. However, they start to cost money. Also, if you don't configure them correctly, you can possibly connect to your neighbor's network or expose your network to others.

No more than one wall / AP in the open

Ars Technica has an excellent write-up on how they test wireless devices. Their advice is to have no more than one wall between your device and your access point. This reduces the amount of time and power required to transmit each piece of data. As noted above, one of the challenges is that some devices may use far more airtime than the amount of data they move would suggest.

Get the AP / router out of the corner of the house. Middle of the house is best. If it is the middle of the day and everyone is on a conference call, don’t have anyone go to the far corner of the house, as that device will use more airtime.

Don’t surround your AP with extra material. That will hinder the operations of the radio. Don’t hide it away behind a pile of books as they will cause interference. Microwave ovens are known to interfere as well.

Buy a mesh network

A mesh network allows you to place multiple APs in your home with near zero work. This allows you to better cover your house. Hopping through a mesh will take twice as much airtime as communicating directly over the same quality connection. However, the goal is to dramatically improve the quality of each connection. Have the mesh nodes transmit through no more than one wall. Both Amazon eero and Google Nest Wifi are mesh network setups that should be pretty easy to install.

Get on 5 GHz

Nearly every device from the past decade that is going to do any streaming supports WiFi on both 2.4 GHz and 5 GHz, and your modem / AP is very likely to support both as well. Things like connected outlets, lights, thermostats, and similar are going to be 2.4 GHz only, but that's fine since they don't use much bandwidth or airtime.

The advantage of 2.4 GHz is that it is quite powerful and can punch through walls. This is good, until you talk about punching through your neighbor's walls and having to compete with their devices. This is why channels are used to split up traffic. There are 11 channels in 2.4 GHz WiFi in the USA. However, each channel an AP is set to also bleeds into roughly two channels on either side. So in reality there are only three non-overlapping WiFi channels in 2.4 GHz: 1, 6, and 11. So that chucklehead who sees there isn't anything on channel 7 is interfering with both channel 6 and channel 11. 5 GHz, on the other hand, has a lot more non-overlapping channels.
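A toy model of that overlap, assuming each channel bleeds about two channel numbers to either side:

```python
def channels_overlap(a: int, b: int, spread: int = 2) -> bool:
    """Two 2.4 GHz channels interfere if their +/- spread ranges touch."""
    return abs(a - b) <= 2 * spread

# 1, 6, and 11 are far enough apart to not interfere with each other;
# someone parked on channel 7 steps on both 6 and 11.
print(channels_overlap(1, 6))   # False
print(channels_overlap(7, 6))   # True
print(channels_overlap(7, 11))  # True
```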

Your AP can support both frequencies; it would have to be old and/or odd not to support 5 GHz these days. If it is currently configured to put out two different SSIDs (network names), one for each frequency band, then you are in luck: just configure your devices to connect to the 5 GHz network. There is one trick though. If you have both networks configured, your device is likely to prefer the stronger signal, and that is likely to be 2.4 GHz, which is likely the one you don't want to be connected to. Since 5 GHz has less ability to punch through walls, it won't interfere with your neighbors. Or, if you are feeling selfish, your neighbors won't be able to interfere with you.

If your AP isn’t broadcasting a different network name for 2.4 GHz and 5 GHz, that is what the mighty Google is for. Give a search for your brand to see if you can find out how to split into two different SSIDs.

Final Thoughts

Go read The Ars Technica semi-scientific guide to Wi-Fi Access Point placement.

What do I do? I live alone in a home that I was able to run Ethernet through before the drywall went up, so I do a combination of the above. I have several bits of home automation, so my device count is rather high. I have 16 devices plugged in via Ethernet. If it can't be plugged in, it is on the 5 GHz band thanks to my split network names; I have over 8 devices on 5 GHz. The final 13 or more are on 2.4 GHz because that is the only band they support. These are sensors and other low-network-usage devices. I have a dead spot in my home as my AP is poorly placed. I have picked up a second AP that I will turn into a mesh device, but it will be connected to my network via a cable.

The number of devices is fuzzy, as Home Assistant is monitoring my MikroTik router and I'm being lazy and only looking at connected devices. Some of my devices are currently offline, and thus are not reflected in the count. I could probably place my AP in a better location, but given where it is I can make it provide good coverage of my lower floor. My upper floor can then be well served by the second AP. It also gives me an excuse to try and use PoE (power over ethernet).

You can also get inSSIDer to see how heavily the different channels are being used. From the little bit I can see of my neighbors' 2.4 GHz channels, some of them are spiking to 60% use the few times I've paid attention. I probably can't see the house beyond them that would cause interference for them. So I suspect they are seeing laggy behavior on those devices.

midPoint Grouper Connector 0.6 setup

midPoint is an IAM solution from Evolveum, and Grouper is an IAM solution from Internet2; together they can do wonderful things. There is a great midPoint/Grouper demo available, with a set of instructions. However, there isn't a lot of documentation on how to implement what is done in the demo for your own production instance. This post is what I did to make it initially work for NDSU. I still have work to do to take it to production.

Listed files can be found in the repo at either midPoint_container/demo/grouper/midpoint_server/container_files/mp-home/post-initial-objects or midPoint_container/demo/grouper/midpoint-objects-manual/tasks/ for the tasks.

First, you need to get the connector from the repo at midPoint_container/demo/grouper/midpoint_server/container_files/mp-home/icf-connectors/ and put that in the icf-connectors dir for your midPoint. You also need to update your schema to include:

<xsd:complexType name="OrgExtensionType">
    <xsd:annotation>
        <xsd:appinfo>
            <a:extension ref="c:OrgType"/>
        </xsd:appinfo>
    </xsd:annotation>
    <xsd:sequence>
        <xsd:element name="grouperName" type="xsd:string" minOccurs="0"/>
        <xsd:element name="ldapDn" type="xsd:string" minOccurs="0"/>
    </xsd:sequence>
</xsd:complexType>

<xsd:complexType name="ArchetypeExtensionType">
    <xsd:annotation>
        <xsd:appinfo>
            <a:extension ref="c:ArchetypeType"/>
        </xsd:appinfo>
    </xsd:annotation>
    <xsd:sequence>
        <xsd:element name="grouperNamePrefix" type="xsd:string" minOccurs="0"/>         <!-- e.g. ref:affiliation: -->
        <xsd:element name="ldapRootDn" type="xsd:string" minOccurs="0"/>                <!-- e.g. ou=Affiliations,ou=Groups,dc=internet2,dc=edu -->
        <xsd:element name="midPointNamePrefix" type="xsd:string" minOccurs="0"/>        <!-- e.g. affiliation_ -->
        <xsd:element name="midPointDisplayNamePrefix" type="xsd:string" minOccurs="0"/> <!-- e.g. Affiliation: -->
    </xsd:sequence>
</xsd:complexType>

And don’t forget to restart midPoint after these changes.

In my particular setup, I don’t have LDAP or a similar concept. So I’m removing some of those bits from the existing setup from demo-grouper. I also am not bringing in extra affiliations and roles, although that is certainly something you could do. It might even make some sense to better configure resources. However, you still need the following in their entirety: functionLibraries/100-function-library-grouper.xml, objectTemplates/100-grouper-template-user.xml, org/100-org-generic-groups.xml, roles/200-metarole-grouper-provided-group.xml.

You will also need archetypes/300-archetype-generic-grouper-group.xml. In my particular case I removed the metarole-ldap-group assignment. Don’t forget to change the xmlns:ext namespace in this file to match your schema extension if you rolled it into your own. In my case, I also remove the ext:ldapRootDn entry as I don’t have LDAP in play from midPoint (that is handled by Grouper).

You also likely need to bring in the objectTemplates/100-grouper-template-user.xml and then make that your default user template.

After that, you're ready to bring in the resource from resources/100-grouper.xml. There are several values that you need to configure; you can find instructions on the Internet2 page for the connector. You will likely need to add a virtualHost configuration entry to get it to connect to something other than the default vhost. The main part of the connector uses Grouper's REST API, and it supports using midPoint constants to store the username and password so that you don't have to hard code them into a file that you want to include in your repo. Unfortunately, I haven't found a way to get that to work with the AMQP connector configuration. Update the matching and exclusion patterns as you see fit. There is a chunk of code that covers inbound mapping, and it contains a switch to choose between different archetypes depending on various matches. For this beginning work, I just deleted the switch statement and left the default archetypeOid alone, as that matches the imported archetype. In the future we will likely want to do something more complex.
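As a rough sketch of the constants approach, it looks something like the following (the constant names and the gconn prefix are illustrative only; check midPoint's constants documentation and the connector's schema for the exact element names):

```xml
<!-- in midPoint's config.xml -->
<constants>
    <grouperUser>midpoint-client</grouperUser>
    <grouperPassword>changeit</grouperPassword>
</constants>

<!-- in the resource's connectorConfiguration, reference a constant
     with a const expression instead of a literal value -->
<gconn:username>
    <expression>
        <const>grouperUser</const>
    </expression>
</gconn:username>
```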

You can now import 100-grouper.xml and make sure that it checks out when testing the connection. Importing tasks/task-reconciliation-grouper-groups.xml gives you the reconciliation task to run against Grouper REST services. This should work and import your groups that match what you have specified. It should also import users, but it requires a recomputation of users if you do it this way. It appears that this task throws an error if you delete a group that has been imported.

Bringing in tasks/task-async-update-grouper.xml gives you the ability to use the async part of the connector. This reads the change messages out of RabbitMQ that Grouper has sent. It will handle group additions that match the pattern, along with group deletions, and member adds and removes. For changes to show up on the group, the trigger scanner needs to run, as with other resources.

Now you can go about figuring out how to get it to do useful work. Our plan is to have midPoint drive the subject DB for Grouper. Grouper will then drive membership into, among other things, services that care about user attributes, like Google Analytics or Qualtrics. That will come into one of these midPoint Orgs, which will then provision out via a different connector. That connector will need to take advantage of disable-instead-of-delete with delayed delete, which is the next task to figure out.

Ambiguous response in Duo Web


Duo Security provides a range of multifactor options for developers to use in their systems. It is popular in many industries, including higher education.


The method for performing Duo MFA in a web page is called Duo Web. This is a fairly straightforward piece of JavaScript added to a page, which brings in an IFRAME to perform MFA. The web application sends the username of the user who is to perform MFA, and Duo Web will respond with a signed response once "MFA" has been completed.

Duo operates based off of what are called integrations. These integrations have several configuration settings that impact how MFA is performed. Some of those settings may result in what most would consider a bypass of MFA.

The key setting I found was the one that allows usernames that aren't enrolled in Duo to bypass MFA. This bypass of MFA produces the exact same Duo Web response as someone who performed an MFA action. The Duo Web response indicates that the user got past the integration, NOT that they performed MFA. Actually performing MFA is up to the configuration of the integration, which must ensure that all paths result in what that particular organization considers to be MFA.

Further complicating the issue is that none of the documentation for Duo Web indicated that this was a potential path that could result in bypassing MFA.


This ambiguous response could result in systems that require MFA not being protected by MFA.

It is common for early deployments and large organizations such as universities to roll out MFA in stages, so it is reasonable to assume that not everyone is enrolled. Even an ambitious university is unlikely to have its admitted students enrolled in MFA while they complete early application tasks.

So we have a pool of accounts that will not have MFA enabled. If the integration is configured poorly as noted above, the protected application will let individuals through without those accounts performing MFA. However, higher education is also a heavy user of federated authentication through the various SAML, CAS, OAuth, etc. protocols. The attributes released by SAML 2.0 and CAS 3.0 IdPs (identity providers) can include whether or not MFA has been performed. There is an international standard, REFEDS MFA, for asserting this fact in SAML.

If the IdP integration is incorrectly configured, it will then assert that MFA has been performed for every user at that institution. This assertion is then accepted by whatever service providers are accessible by that account. If that service provider requires MFA, any of those users would be allowed in. This takes the incorrect configuration of a single site, and turns it into a problem for external service providers.

Most would consider this to be a problem with incorrect configuration. There are certain communities out there that already knew of this issue. The issue is that the documentation in most cases isn’t clear that this could be a problem.


Duo’s documentation hid this configuration behind words like “etc”, and “secondary authentication”. This is a problem. The only way to know about the configuration issue is to find it in testing. Duo’s documentation for Duo Web has been updated, in a way that I hope is more clear about the problem.

The best solution is to not configure the integration to allow unregistered users through. Both CAS and Shibboleth IdP have configuration options to only trigger the MFA workflow on certain accounts. My solution was to enable MFA workflow through AD group membership, and to set the integration to require MFA for all accounts presented. Individuals that aren’t enrolled for MFA never pass through the Duo Web workflow step. Therefore MFA is never asserted for those users.

What is untested is how any other configuration in Duo integrations impact this. It is possible that groups may also impact how this could be abused and how it needs to be managed.

As of CAS 5.2, CAS performs a Duo Auth API call before going to the Duo Web workflow. This preauth request is made WITHOUT an IP. An allow at this step, without an IP, means that the user won't do any sort of MFA; CAS then skips the MFA workflow and doesn't assert that MFA happened. If the preauth comes back as requiring MFA, the Duo Web workflow triggers, which may still allow the user through based on a remember-me for N days, or even because the user is at the correct IP behind a locked door, if the integration is so configured. If preauth is called with an IP, then an allow may mean an unenrolled user is being let through, or that the user is required to perform MFA but is at an allowed IP.
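The branching above can be modeled as a toy decision function (this is my reading of the flow using Duo Auth API preauth result values, not CAS's actual code):

```python
def cas_action(preauth_result: str) -> str:
    """Toy model of CAS 5.2 behavior after the no-IP Duo preauth call."""
    if preauth_result == "allow":
        # User bypasses Duo entirely (e.g. unenrolled, with the integration
        # set to allow unknown users). CAS skips the MFA workflow and does
        # NOT assert that MFA happened.
        return "skip, no MFA asserted"
    if preauth_result == "auth":
        # The Duo Web workflow triggers. It may still pass the user via a
        # remembered device or an allowed IP, depending on the integration.
        return "run Duo Web workflow"
    return "deny"
```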


  • Duo PSIRT Notification – 2018-11-07
  • Duo PSIRT Conclusion – 2018-11-21
  • Publication – 2019-03-24

Line number leak in CivicPlus


CivicPlus provides a web platform for local governments. Included in this platform is the ability to send notifications to residents that opt in for those notifications. These notifications can be sent via email or SMS. They have their security FAQ, which answers several questions, except for the important one.

My local municipality became a CivicPlus customer in 2018.


West Fargo opted us in via the email address we were using for our utility bills. West Fargo has been pretty proactive about communication over a variety of channels with respect to what is going on in the community. I wanted to sign up for additional alerts. The site follows the standard mailing list method of sending a verification message to the address when changes are made, or the account is first signed up.

The change messages have a one time code in them to validate the change. However, the URL to view your settings is simply a link containing <email@address>&CID=255. That's it to log into the site. When you are logged into an account, you can view the subscribed lists and the email address, which you already know. In addition, if the user has signed up for SMS messages, you can also see the last four digits of their phone number. The area code and exchange are dotted out, and those aren't sent to the browser. Still, being able to convert an email address into the last four digits of a phone number with zero effort is less than ideal. See Krebs's recent article about Why Phone Numbers Stink As Identity Proof. This is a slightly different problem than covered in his post, but being able to convert email addresses into a partial number doesn't help. Email addresses are generally a lot less private than phone numbers.


For a company that tries to put an emphasis on security in their marketing, their FAQ and site in general are conspicuously missing a vulnerability disclosure policy. I had to send an email to their help address asking about their VDP. That forced me to create an account in their ZenDesk ticketing system, which naturally had certificate problems. I ended up having zero success trying to communicate with the company.

I also notified my municipality about the issues I found with their site. They were responsive, and were able to directly follow up with the company. When I made an inquiry a couple of weeks ago to close this out, WF sent me the response from the company. Thank you to West Fargo for working with me on this.


CivicPlus's response to the city was along the lines that it was just the last four of the phone number, and it wasn't likely that "another citizen is going in with a specific email address". I wasn't worried about another citizen; I was worried about someone up to no good. You don't design security systems around the idea that everyone will be a good actor.

What they should do is just send an email with a one time unique code to let the person back into their account. That would eliminate the need to create another account, and prevent anyone in the world from poking at random addresses and perhaps seeing parts of phone numbers. But instead, no action has been taken as the company doesn’t see it as a problem. They also don’t have a security researcher response mechanism to go along with their security claims. So ideally they’d post a vulnerability disclosure policy.
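A minimal sketch of that one-time-code approach (the names and the 15-minute lifetime here are my own choices, and a real system would persist tokens server-side rather than in memory):

```python
import secrets
import time
from typing import Optional

# token -> (email, expiry); illustrative in-memory store
_pending = {}

def issue_login_token(email: str, ttl_seconds: int = 900) -> str:
    """Create a single-use login code to email to the subscriber."""
    token = secrets.token_urlsafe(32)
    _pending[token] = (email, time.time() + ttl_seconds)
    return token

def redeem_login_token(token: str) -> Optional[str]:
    """Return the email if the token is valid and unexpired, else None."""
    record = _pending.pop(token, None)  # single use: removed on redemption
    if record is None:
        return None
    email, expiry = record
    return email if time.time() < expiry else None
```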


Don’t put your phone number in for CivicPlus notifications. It’s really that simple. Likely most people aren’t signing up for the SMS messages, but there is even less reason to do so now. CivicPlus is in a lot of cities, so your town may be using their services.

Communication Timeline

  • November 26, 2018 – Email to the only address available on the site, asking about a VDP
  • November 26, 2018 – Ticket created from email
  • November 26, 2018 – Contact with West Fargo
  • November 28, 2018 – Ticket closed with zero information
  • Early March 2019 – Direct Message via Twitter to find security contact, no response
  • March 5, 2019 – Contact with West Fargo to see if they heard back. Quick response and info from November 28, 2018 response from CivicPlus saying it isn’t a problem.
  • March 20, 2019 – Publication

Building a hacking village

NDSU IT, ND Education Technology Council, EduTech, and the ND Information Technology Department are putting on their annual ND Cyber Security conference next Thursday. I’ve spoken at this iteration of the conference twice, and was looking to put something together for this year’s conference.

We had an earlier iteration of the conference, and at that one a co-worker and I put on a workshop where participants executed a variety of attacks against vulnerable systems. That was back in something like 2005, so it was challenging to put on. My boss had suggested doing a CTF this year, but since I’ve never run a CTF, that sounded a bit intimidating.

I’m a Linux guy that does Java development, but attends a lot of security conferences. Last calendar year I attended four: ours, Dakota State University’s DakotaCon, DerbyCon, and The Long Con up in Winnipeg. I see a LOT of attacks against Windows. However, I’ve never done them. So I came up with the idea of having a hacking village. Most security conferences have a lock picking village, so let’s do something like that, but with computers.

So with the help of someone in our IT security office, we’re putting on a hacking village. We’re also enlisting the help of a couple of local pen testers. The lab will include general attacks, Windows domain-based attacks (so I have an excuse to do them), wireless attacks, and we should also have a lock picking village in the same room.

I’ll be posting the instructions from the village here, and I also hope to post about how it goes. This might be something other conferences or user groups want to try in the future.

Social Engineering Toolkit in the Hacking Village

This is the first of a series of posts describing how to perform the various types of attacks that are available to try in the Hacking Village at the ND Cyber Security Conference. These will serve as instructions during the conference, and as a resource after the conference.

First up is the Social Engineering Toolkit from Dave Kennedy of TrustedSec. This toolkit demonstrates how to perform a variety of social engineering attacks.

From the Toolkit:

DISCLAIMER: This is only for testing purposes and can only be used where strict consent has been given. Do not use this for illegal purposes, period.

The attack method to be tested is cloning a website to harvest credentials.

  1. Open a console on Kali Linux
  2. Type setoolkit and press Enter to launch
  3. Choose 1 for Social-Engineering Attacks
  4. Choose 2 for Website Attack Vectors
  5. Choose 3 for Credential Harvester Attack Method
  6. Choose 2 for Site Cloner
  7. Press Enter to accept the default IP address
  8. Enter the URL of a login form you control to clone
  9. Press Enter to acknowledge SET’s warning
  10. Launch Firefox
  11. Go to http://localhost to load the cloned page
  12. Any credentials that you enter will be posted back to SET in plain text. DO NOT USE REAL CREDENTIALS.
  13. Go back to the SET console to see the captured credentials

More instructions and operations can be found on the SET website.

This was originally posted at the NDSU Tech Blog.

Username only authentication in T2 Systems Parking


This vulnerability was discovered in May of 2017.

T2 Systems is a parking systems provider to many organizations. NDSU uses it via the North Dakota University System contract to allow employees and students to buy parking permits for certain lots. According to the T2 webpage, other institutions use it to check whether scanned license plates are allowed to park in certain lots.


The workflow for NDSU employees and students to renew their parking permits was to log into the appropriate PeopleSoft system and navigate to the parking section. At the time, a Duo MFA prompt was triggered for employees as part of that step. The user then chose the institution to park at and was passed to the T2 Systems parking application without further authentication.

Looking at the network traffic showed that the entire request to authenticate against the parking system was:


Post Data:



Testing with a willing coworker resulted in me being able to directly access their records with only knowledge of their EMPLID. An EMPLID is not likely to be a secret value for most users. There were no other tokens in use.


At NDSU, how the system is used reduces the impact. Information from other universities suggested that individuals registered their license plates in the system, and those were checked by scanners. An attacker could easily remove valid license plates and/or add their own to other records. This would also allow an attacker to translate EMPLIDs to license plates via this system.


NDUS moved their T2 Systems parking integration over to SAML 2 in August 2017.

There is no way to protect the original authentication mechanism, which passes moderately well-known values as its secret without any sort of cryptographic protection.
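To make that concrete, here is a hypothetical sketch of what a minimally protected handoff could look like: instead of a bare EMPLID, the sending system attaches a timestamp and an HMAC over both, so the parking application can verify the value was produced by the trusted system and reject replays. All names and the token format are my own invention; the actual fix NDUS adopted was SAML 2, which handles this and much more.

```python
# Hypothetical signed, expiring handoff token, in contrast to passing a
# bare EMPLID. A shared secret lets the receiver verify the sender.
import hashlib
import hmac
import time

SHARED_SECRET = b"example-shared-secret"  # provisioned out of band
TOKEN_TTL = 60  # seconds the handoff token stays valid

def make_token(emplid, now=None):
    """Produce 'emplid|timestamp|hmac' for a cross-system handoff."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(SHARED_SECRET, f"{emplid}|{ts}".encode(),
                   hashlib.sha256).hexdigest()
    return f"{emplid}|{ts}|{sig}"

def check_token(token, now=None):
    """Return the EMPLID if the token is genuine and fresh, else None."""
    try:
        emplid, ts, sig = token.split("|")
    except ValueError:
        return None
    expected = hmac.new(SHARED_SECRET, f"{emplid}|{ts}".encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # forged or tampered
    if (now if now is not None else time.time()) - int(ts) > TOKEN_TTL:
        return None  # replayed after expiry
    return emplid
```

With something like this in place, knowing a coworker’s EMPLID is no longer enough: an attacker would also need the shared secret to forge a valid signature, and a captured request goes stale after the TTL.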


2017-05-16 NDUS notified and responded
2017-05-16 T2 Systems notified and responded
2017-05-18 Last communication from T2 Systems
2017-08-14 NDUS switches to SAML