Nukelear v.2
Jun 25, 2004
My optional title text

Defenestrategy posted:

As part of my role as infosec guy, I've been tasked with doing "employee education," so every two months I put out a short company newsletter. It summarizes broad-strokes, significant infosec events affecting the company, such as successful phishing attempts on employees or foreign IP logins, plus an "infosec tip of the day" kind of thing that outlines a way to be slightly safer, like enabling MFA or signing emails with PGP.

My question is: Am I just pissing in the wind with this, or is this kind of thing worthwhile?

Maybe somebody will read it, but yeah, it's mostly CYA so they can't feign total ignorance when an incident occurs.

I've found that active, participation-based events pay better dividends than just tossing reading material out into the world.
Phishing campaigns show you how many people fall for obvious attacks, and the users who click realize that maybe they aren't so clever about spotting these after all.
Same with running capture-the-flag events with developers instead of having them watch boring videos about the OWASP Top 10.


Nukelear v.2
Jun 25, 2004
My optional title text

Defenestrategy posted:

I had an idea for a presentation where I would use OSINT to gather information on a volunteer, present a bio on them as it could be used for nefarious purposes, and then show how to lock the information down to an extent. But I fear the ramifications of teaching the workplace how to efficiently google-fu/harvester/etc. their coworkers.

Yea, that's gonna get creepy super quick when you start presenting pictures of their kids and house. Realistically, people aren't going to stop using social media anyway. I'd imagine OSINT isn't really your biggest threat though, so I'd think more about how to target whatever is.

Nukelear v.2
Jun 25, 2004
My optional title text

Martytoof posted:

I don't know how strictly you have security tools defined but I guess there's other ways to achieve monitoring and "security tooling" than with processes on each container. Things like Twistlock and Aqua don't run in each individual container IIRC and still do some overall monitoring. I also don't have a super strong opinion but it seems like running 200 instances of all your security tools in a K8s cluster would be a recipe for A Bad Time™

Yea, I'd be curious what security tooling they're running inside all the containers. All the K8s tools I can think of run as a DaemonSet, like you mentioned.
For us with Twistlock, the in-container deployment is only for 'serverless' platforms like ECS/Lambda, where you have to jump through those hoops just to get something with a fraction of the capability.

Nukelear v.2 fucked around with this message at 16:21 on Sep 10, 2021

Nukelear v.2
Jun 25, 2004
My optional title text

Proteus Jones posted:

According to the article, that's all the latest patch does anyway.

The diff looks to be a mite more thorough than just that (https://issues.apache.org/jira/browse/LOG4J2-3201); it looks like you'd have to change quite a few settings.

Nukelear v.2
Jun 25, 2004
My optional title text

Meanwhile, Prisma Cloud Compute (Twistlock) still can't track this CVE; it would have been real easy to figure out our exposure if this were working.

Nukelear v.2
Jun 25, 2004
My optional title text
Are there any details on the vulnerability for 1.x? Especially whether there's any equivalent to the log4j2.formatMsgNoLookups property that could be set to mitigate the issue.
Yes, it's been EOL for a mere 6 years, but we can't be bothered to upgrade.
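On the 2.x side, while we wait on vendor tooling, a crude but effective check is just to look inside your jars for the JndiLookup class that makes Log4Shell work. A quick sketch (paths and scope are yours to adapt):

```python
import zipfile
from pathlib import Path

# Presence of this class is what makes a log4j 2.x jar exploitable via JNDI.
VULN_CLASS = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def jar_has_jndi_lookup(jar_path):
    """True if the jar bundles the JndiLookup class targeted by Log4Shell."""
    try:
        with zipfile.ZipFile(jar_path) as jar:
            return VULN_CLASS in jar.namelist()
    except zipfile.BadZipFile:
        return False  # not actually a jar; skip it

def scan(root):
    """Yield every jar under root that still contains the vulnerable class."""
    for jar in Path(root).rglob("*.jar"):
        if jar_has_jndi_lookup(jar):
            yield jar
```

This won't catch jars nested inside fat jars or wars, so treat a clean result as a starting point, not a clean bill of health.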

Nukelear v.2
Jun 25, 2004
My optional title text

Dread Head posted:

If you want to believe this: https://github.com/apache/logging-log4j2/pull/608#issuecomment-990494126

Then it should not be impacted; that's probably the most official thing you will get. We only shut down stuff with 2.x running.

I want to believe. Appreciate it, I totally missed ceki's post on that page.

Nukelear v.2
Jun 25, 2004
My optional title text

BaseballPCHiker posted:

Good loving lord. Now someone doesn't want to update because a change freeze was going to go into effect next week.

I should just start live-tweeting this poo poo. I can't believe this company hasn't been totally annihilated by ransomware or something yet.

I would say sticking with 1.x for a few weeks isn't an awful idea while you wait to see how this shakes out. As r u ready to WALK pointed out, the conditions for the 2019 CVE are pretty limited and can be confirmed easily.

Log4j 2 has shown itself to have made some pretty fundamental mistakes and was pretty hastily patched. Pushing people from 1 to 2 haphazardly seems a bit risky, since this is potentially only the tip of the iceberg for 2.

Nukelear v.2 fucked around with this message at 20:04 on Dec 13, 2021

Nukelear v.2
Jun 25, 2004
My optional title text

BaseballPCHiker posted:

Yup...

My boss now wants us to scan every desktop in the environment for any potential apps running log4j. I've tried to explain that if the attacker is actually able to get onto a desktop and input malicious code to an app running on the desktop, then it's already game over.

Not saying it's not worthwhile to document these apps and work on getting them patched as necessary, just that it shouldn't be a work-over-the-weekend priority. I am tired.

If you're a Defender shop, they just added Log4j support. We found one hit in our environment.

Nukelear v.2
Jun 25, 2004
My optional title text
If it helps, we ended up deploying Zscaler to cover a lot of Zero Trust principles at our shop for user access. We launched right at the start of the covid lockdowns, and it's been a nice pivot off VPN.
Users' machines get device posture assessments, SSO with MFA, fairly granular segmentation of which internal resources they can reach, and the usual network filtering, sandboxing, DLP, etc.

The term is a bit loaded, so defining goals and objectives for all your different use cases is important.

Nukelear v.2
Jun 25, 2004
My optional title text
On the cloud end of VM, I've been playing with Orca and Wiz lately. Their workflows for finding and prioritizing vulnerabilities and automating Slack/Jira/ServiceNow alerting are pretty slick.
Combining VM and CSPM lets you do some really nice things.

Nukelear v.2
Jun 25, 2004
My optional title text
The other Okta takeaway: review what developers call internal tools. Watching them try to downplay the abilities of their 'Super User' app isn't a position I'd want to be in.

Nukelear v.2
Jun 25, 2004
My optional title text

KillHour posted:

Does Zoom have a stake in Okta or something?

Edit: Also this is why you don't use your real name on social media unless you're like a celebrity or something.

I would guess that if you're a customer important enough, you could get a redacted version of the breach report from Okta. Presumably this would be under NDA, and that's why he got fired.

Nukelear v.2
Jun 25, 2004
My optional title text

Potato Salad posted:

lol lmao Okta didn't have an obligate password vault & permissions checkout system

what the gently caress

Okta sells https://www.okta.com/products/advanced-server-access/, we ended up not buying it because it doesn't work on Domain Controllers. Guess they ran into the same issue.

Nukelear v.2
Jun 25, 2004
My optional title text
In the interest of sparking discussion, and maybe helping folks in positions like Baseball's keep their skills from atrophying, here's how those two situations would go down at my company.
This is by no means the one right way to do it, just the way we do it. For background, we're a 200-ish person tech company.

Stolen access key scenario:
- First off, creation of access keys and IAM users is disallowed by SCP. Teams need to open a ticket with IS explaining why they're needed; the only acceptable answers are CI/CD (preferably only to seed Vault dynamic access) or a third-party app that can only do access key/secret key.
- Second, IAM users should never have console access. Humans get access via SSO + JIT. Having an IAM user with console access would have failed them in CSPM.
- If the alert for Tor access from an access key came in through GuardDuty, automation would disable the key in real time. If it came from manual threat hunting, IS is empowered to disable it; depending on the account/access level I might talk to the team first, but most likely I'd just disable it.
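For flavor, the GuardDuty auto-disable automation is only a few lines in a Lambda. A minimal sketch, not our exact code; the event field names follow the documented GuardDuty-via-EventBridge finding shape, and the injectable `iam` client is just there so it's testable outside AWS:

```python
# GuardDuty finding type for an access key used from a Tor exit node.
TOR_FINDING = "UnauthorizedAccess:IAMUser/TorIPCaller"

def handle_finding(event, iam=None):
    """Lambda handler for GuardDuty findings delivered via EventBridge.

    Deactivates (not deletes) the flagged access key so the key and its
    history stay intact for forensics. Verify the field names against your
    own events before trusting this shape.
    """
    detail = event.get("detail", {})
    if detail.get("type") != TOR_FINDING:
        return None  # not our finding; let other rules handle it
    key = detail["resource"]["accessKeyDetails"]
    if iam is None:
        import boto3  # only needed when actually running in Lambda
        iam = boto3.client("iam")
    iam.update_access_key(
        UserName=key["userName"],
        AccessKeyId=key["accessKeyId"],
        Status="Inactive",
    )
    return key["accessKeyId"]
```

Wire it to an EventBridge rule matching GuardDuty findings and the key is dead before a human reads the alert.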

WordPress compromise scenario:
- Our CSPM would have been failing them for having open security groups, and we'd keep escalating through management until they fixed it.
- A daily Lambda enumerates all public-facing assets in the org and feeds them to the VM external scanner. That hopefully would have picked up the ancient WordPress install and followed the same escalation, but we get a lot more aggressive on external-facing vulns.
- If GuardDuty caught the C2 activity, the automated IR process would have kicked in; if it were manual threat hunting, I'd fire off the IR process.
- The IR process fires an alert to the owner tag on the account, detaches the NIC but leaves the machine running, snapshots the instance, and sends the snapshot to an IS account for forensics.
- If we feel we've got a good idea of what was going on from the snapshot, we shut it down and teams restore however they do. If we think we need to inspect memory, then we get back into the instance and start that. I've never had to do that.
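The containment step is similarly small. A sketch of one common implementation, with a caveat: since AWS won't let you detach a primary ENI, this version quarantines by swapping in a deny-all security group instead, which keeps the box running (and memory intact) while cutting its network. `quarantine_sg` and the forensics account ID are placeholders, and the injectable `ec2` client is for testability:

```python
def quarantine_and_snapshot(instance_id, quarantine_sg, forensics_account, ec2=None):
    """Isolate a compromised instance and stage its disks for forensics."""
    if ec2 is None:
        import boto3  # only needed when running for real
        ec2 = boto3.client("ec2")
    # Cut network access by swapping in a deny-all security group;
    # the instance keeps running so memory is preserved.
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg])
    inst = ec2.describe_instances(
        InstanceIds=[instance_id])["Reservations"][0]["Instances"][0]
    snapshot_ids = []
    for bdm in inst.get("BlockDeviceMappings", []):
        snap = ec2.create_snapshot(
            VolumeId=bdm["Ebs"]["VolumeId"],
            Description=f"IR evidence: {instance_id}")
        # Share each snapshot with the forensics account for offline analysis.
        ec2.modify_snapshot_attribute(
            SnapshotId=snap["SnapshotId"],
            Attribute="createVolumePermission",
            OperationType="add",
            UserIds=[forensics_account])
        snapshot_ids.append(snap["SnapshotId"])
    return snapshot_ids
```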

Nukelear v.2 fucked around with this message at 17:14 on Jul 1, 2022

Nukelear v.2
Jun 25, 2004
My optional title text

BaseballPCHiker posted:

Are you hiring!?!?

Here's, in less detail, how it would've been handled at my previous, competent employer.

  • Not have been allowed in the first place: restrictions on public security groups via Config and conformance packs.
  • Alerts generated by posture tool for security to review.
  • GuardDuty would've had a higher chance of detection based on custom threat intel lists being used.
  • Snapshot of instance would be taken to be analyzed in malware lab.
  • We'd have logs easily accessed not just for vpc, cloudtrail, etc but system logs for the instance. Would be able to better identify how they gained access, what they did, etc.
  • A bridge call would've been spun up with heavy hitter management types. Discussions around disclosure to customers and with legal would take place.

EDIT: The logging part more than anything right now is what absolutely kills me. Beyond knowing the vulnerability that allowed them to compromise the instance, we don't really know what they did once they were in, where traffic was going, etc. We're totally loving blind, and no one gives a poo poo! I couldn't even run Athena queries against our CloudTrail logs because they're set up so loving stupidly that people were concerned about the costs! That's the level I am dealing with here. It was just a giant "Well, we shut it down, no harm done, derpa derp" and move on to the next thing from the company.

We were hiring, but I filled my last two spots the other week.

That's a solid process, but dang, now I have to look into getting custom threat intel into GuardDuty; I hadn't noticed I could do that.

Effective logging is hard and expensive, and people fall into that 'what we don't know can't hurt us' mindset. Our team's first value statement is: "Remediation is negotiable; logging and event capture are not." I didn't write that, but I've grown to really appreciate it.

Nukelear v.2
Jun 25, 2004
My optional title text

BaseballPCHiker posted:

How many of you are security engineers or similar? And do you focus day to day on tools/software or are you doing design approvals? Or more broadly whats your day to day like.

More and more it appears that they want me to just own a few security tools and do nothing but that.

Everyone on our team is generally a security engineer of some level; the divisions for us are primarily around which security pillar you support. Some pillars, like vuln management, are basically all about taking tooling output and driving actions with teams. I do app and cloud sec, and that ends up being a pretty solid mix of building out pipelines for "shift-left" tooling (SAST, DAST, IaC, etc.), CSPM rule writing, deploying cloud-native tooling for guardrails, app architecture review and approval, etc. I think you'd really need to ask during the interview exactly what you'd be doing; the job descriptions tend to be junk.

One of the slightly more interesting things we did to escape spreadsheet hell was building an automation platform. It's basically just a query Lambda and Step Function that pull results from all our tooling and centralize them in an RDS database, with a Power BI frontend showing overall status and information on specific pillars. At a high level we give letter grades, which are easy to present to senior management, plus a line chart of each team's score over time so they can see the direction they've been trending. The teams themselves can drill into the Power BI detail tabs and get the more traditional spreadsheet view of all the issues affecting their score. This centralizes all tooling in one spot and saves us from having to train folks on, and grant them access to, a dozen or so different tools.
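The grade rollup itself doesn't need to be fancy. A toy version, with made-up weights and bands (ours are tuned differently and live in config):

```python
# Hypothetical severity weights and grade bands; tune to taste.
SEVERITY_WEIGHTS = {"critical": 10, "high": 5, "medium": 2, "low": 1}
GRADE_BANDS = [(95, "A"), (85, "B"), (70, "C"), (50, "D"), (0, "F")]

def team_grade(findings, max_penalty=100):
    """Collapse a team's open findings into a 0-100 score and a letter grade.

    `findings` is a list of dicts with a "severity" key, i.e. the normalized
    rows pulled from each tool into the central database.
    """
    penalty = sum(SEVERITY_WEIGHTS.get(f["severity"], 0) for f in findings)
    score = max(0, 100 - min(penalty, max_penalty))
    letter = next(grade for floor, grade in GRADE_BANDS if score >= floor)
    return score, letter
```

The letter grade is what management sees; the score-over-time chart just plots the numeric half.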


BaseballPCHiker posted:

Also, for those who use AWS GuardDuty on a regular basis: they've rolled out new malware detections for EBS volumes that look pretty cool. I've been playing around with it a bit this morning and yesterday. Makes me think at some point they'll use this same technique to improve Inspector for vuln scans. Some vendors, Wiz comes to mind, are doing that for vuln scanning right now: basically take a snapshot of your instance, move it over to their account to scan, and then report back any findings. Pretty slick!

Everyone seems to be getting on the agentless-scanning bandwagon lately, and it is a very nice model for ensuring 100% coverage. AWS, however, is still pretty basic. Some of the other vendors doing this can construct fairly elaborate attack paths by tying vm/cspm/secret-scanning tooling together in one database. Having to grant third parties that much access to your environment can cause heartburn for some folks, though. I've also noticed AWS's vuln scanning is pretty spotty; it missed some containers with log4j here that other tools found.

Nukelear v.2
Jun 25, 2004
My optional title text

Klyith posted:

Does an analogous concept of pair programming exist in infosec?

It seems to me like having a 2nd opinion available would be even more powerful for infosec than it is with programming. If you commit a giant loving blunder it seems harder to catch it post-hoc than in code. Anyways it would be really helpful when somebody is about to save master-passwords.txt to a share for someone sitting next to them to say "wait that seems super dangerous".

Honestly, it seems like IS tends to be even worse than normal developers when it comes to basic appsec principles. This PowerShell script sounds like it came from the IAM or IR teams and should have lived in a repo, where it would probably have had proper RBAC and secret detection would have flagged it; instead it was sitting on a network share.

Nukelear v.2
Jun 25, 2004
My optional title text

SlowBloke posted:

Yelling "Schrems" at the top of your lungs doesn't make your European-hosted data safe from the Yanks. If they want your data, they will happily black-bag you, or whichever admin is easier to grab, and turn kneecaps into fine powder until the password and MFA needed to fetch what they want are provided. Going all "putting data in Azure makes it possible to be exfiltrated by a random passerby," as if Hetzner or OVH were bastions of security, is false hope. If you have hardcore high-risk data, your government has safe facilities for that; average shitposting doesn't require those, and using the same baselines for standard LoB is nonsense. No government entity in Europe has been fined for using Microsoft 365, so your point doesn't make much sense.

Honestly they don't even have to do that. The western intelligence orgs are all allied and share information, so the CIA calls the RCMP who calls MS Canada instead of the CIA calling them directly.

Nukelear v.2
Jun 25, 2004
My optional title text

Sickening posted:

Can any of you recommend an app control solution that isn't hot garbage? I need it to do some basic things.

1: A collective app inventory of all apps across workstations/hosts
2: A system of allow/deny list of app execution
3: decent macOS support

Shot in the dark, but chime in if you have recently worked with something.

You can check out Defender for Endpoint; it can do most of this. It can't deny-list on macOS, at least last I checked, but I expect it will soonish.

Nukelear v.2
Jun 25, 2004
My optional title text

Rescue Toaster posted:

https://ubuntu.com/tutorials/how-to-verify-ubuntu#4-retrieve-the-correct-signature-key

Ok, please tell me I'm crazy. Am I the only person in the world who understands that this does nothing? You look at the signature you got with your ISO, then you specifically download the exact public key that you already know was used to produce that signature, then you verify it. I don't want to know whether my SHA256SUM file has a valid signature; I want to know that it was signed with the right private key. Looking at the signature to determine which key is right is the dumbest possible path you could take.

There should be a big "What is the current correct Ubuntu ISO signing-key fingerprint?" FAQ, and multiple websites that maintain lists of the current signing-key fingerprints for various distros, so someone could compare all those sources and make sure they match, etc.

EDIT:
Manjaro does a good job, saying to pull from their gitlab or from ubuntu's keyserver but by name instead of by fingerprint. https://wiki.manjaro.org/index.php?title=How-to_verify_GPG_key_of_official_.ISO_images
Arch does okay too, saying to use wkd or linking to the exact fingerprint from ubuntu's keyserver. https://archlinux.org/download/
Linux Mint specifies which fingerprint exactly to get from the Ubuntu keyserver: https://linuxmint-installation-guide.readthedocs.io/en/latest/verify.html
Debian also specifies fingerprints: https://www.debian.org/CD/verify
Qubes talks in great detail (unsurprising) about how to verify the fingerprints: https://www.qubes-os.org/security/verifying-signatures/

So maybe it's just ubuntu being horrible fuckups and encouraging terrible practices?

I would assume they only serve valid Ubuntu signing keys from hkp://keyserver.ubuntu.com
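Either way, the workflow those other distros encourage boils down to: pin the fingerprint from sources independent of the download, then compare it to what gpg reports, rather than fetching whichever key the signature names. The comparison step is the easy part; a sketch (the pinned value below is illustrative, not an authoritative Ubuntu fingerprint, so go confirm the real one from multiple sources yourself):

```python
import hmac

def fingerprint_matches(reported, pinned):
    """Compare GPG key fingerprints, ignoring spacing and case.

    hmac.compare_digest isn't strictly needed (fingerprints aren't secret),
    but constant-time comparison is a cheap habit in any security path.
    """
    norm = lambda s: s.replace(" ", "").upper()
    return hmac.compare_digest(norm(reported), norm(pinned))

# The fingerprint you wrote down from several independent sources;
# illustrative placeholder only.
PINNED = "8439 38DF 228D 22F7 B374 2BC0 D94A A3F0 EFE2 1092"
```

Feed it whatever `gpg --fingerprint` prints after you import the key, and refuse the ISO if it doesn't match.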

Nukelear v.2
Jun 25, 2004
My optional title text

some kinda jackal posted:

Like to me it FEELS like DAST should be the security equivalent of unit testing in the sense that it can't just be "I pointed a web vulnerability scanner against my landing page" -- there has to be some more introspection you give the tool, app flows, specific test cases, etc., but I'm doing a lot of assuming here.

If I'm right, I guess I'd love to see a sample of how this is done conceptually, along with some sample tools, because just googling around, a lot of what I find is my "can't just be" scenario: tools where maybe you plug in a credential to give the tool one level of introspection into your app, but nothing particularly methodical.

My shop primarily produces APIs, so our DAST tooling is geared toward that; general web DAST tools like ZAP or Burp didn't seem to do an amazing job.

The commercial tool we use (look at Noname, APIsec, etc.) consumes the schema of your APIs (Swagger, API Gateway integration, etc.); you give it a few different credentials to test with and, with some guidance, it will generate playbooks, basically test scripts/unit tests, that look for specific scenarios, usually the OWASP API Top 10. Take broken object level authorization: POST with user A, then GET with user B, and if user B can read the data, you've got an issue. The whole system can be controlled via API with a CLI, so we wrote CI/CD jobs that dev teams include after they push to staging. It takes a bit of time to run, and some care and feeding if the APIs change dramatically.
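That broken-object-level-auth check reduces to a tiny harness. A sketch with the HTTP client injected (`check_bola` and `fetch` are hypothetical names for illustration, not our vendor's API):

```python
def check_bola(fetch, object_id, owner_token, other_token):
    """Broken object level authorization probe.

    The owner should be able to read the object; any other user should get
    a 403/404. `fetch(object_id, token)` returns an HTTP status code; back
    it with your real client (requests, httpx, whatever).

    Returns True if the endpoint looks vulnerable.
    """
    owner_ok = fetch(object_id, owner_token) == 200
    other_ok = fetch(object_id, other_token) == 200
    # Only meaningful if the owner's request succeeded in the first place.
    return owner_ok and other_ok
```

The real playbooks chain dozens of these per endpoint, but the shape is the same: write as one principal, read as another, flag on success.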

SAST won't find the gross logic errors that permeate APIs, so basically it's a way for us to be comfortable that teams don't introduce any major issues between pen-test engagements. Also a backstop against lovely pen testers.

Nukelear v.2 fucked around with this message at 14:53 on Oct 4, 2023

Nukelear v.2
Jun 25, 2004
My optional title text

Subjunctive posted:

do DAST tools do fuzzing, generally? can you guide them with format and other information?

The ones I've used do. Beyond just feeding the app out-of-spec data types, we have some industry-specific identifiers that we look for, so we build tests that attempt to get/put data in that format.
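Conceptually, the identifier fuzzing is just a generator of out-of-spec variants of a known-good value. A generic sketch (the mutations here are stock ones; the industry-specific formats obviously don't belong on a forum):

```python
import random

def mutate_identifier(ident, rng=None):
    """Yield out-of-spec variants of a well-formed identifier:
    wrong length, control bytes, injection suffixes, wrong charset."""
    rng = rng or random.Random(0)  # seeded so runs are reproducible
    yield ident + ident                         # doubled length
    yield ident[:-1]                            # truncated
    yield "\x00" + ident                        # leading control byte
    yield ident + "' OR '1'='1"                 # classic injection suffix
    yield "".join(rng.choice("!@#$%^&*") for _ in ident)  # wrong charset
```

Each variant gets thrown at the get/put endpoints, and anything other than a clean validation rejection becomes a finding.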


Nukelear v.2
Jun 25, 2004
My optional title text

Rescue Toaster posted:

I'm dealing with a lovely device that has ancient HTTPS and modern firefox is officially reporting "gently caress You" when connecting to it.

An old Firefox 88 says the device uses TLS 1.0 with TLS_RSA_WITH_3DES_EDE_CBC_SHA (112-bit). Which, yeah... But old Firefox could connect with the about:config TLS-deprecated setting on. The cert is RSA 1024.
Modern Firefox 100+ refuses outright regardless of settings; I'm assuming everything has been compiled out. openssl won't even handshake enough to report literally anything, even with the -security_debug_verbose switch.

The device's management interface is already on a VLAN, but even then I question going to plain HTTP. Or maybe these algorithms are so absolutely pathetic these days that they're effectively no better than HTTP.

Is there some setting in modern firefox or chromium I'm missing? Building my own version of something? A VM with an old version of firefox that only connects to that VLAN and never gets updated forever?

'gently caress you' is a bit vague, but my heart says to check that the cert presents a value in Subject Alternative Name (SAN). Ye olde certs just presented a Common Name, and that's been deprecated for a few years.
Ancient devices might be using an ancient cert process that's long deprecated.
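If you can get the cert dict out (e.g. from `ssl.getpeercert()` on a connection that validates, or by parsing the PEM with another tool), the SAN check itself is one line. A sketch:

```python
def cert_has_san(cert):
    """True if a cert dict shaped like ssl.getpeercert()'s output carries a
    subjectAltName entry. Modern browsers ignore Common Name entirely, so a
    cert without a SAN will be rejected no matter what else is right.

    Caveat: ssl.getpeercert() returns an empty dict on unverified
    connections, so grab the cert with verification on, or pull the PEM
    out-of-band and inspect it instead.
    """
    return bool(cert.get("subjectAltName"))
```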
