Roargasm
Oct 21, 2010

Hate to sound sleazy
But tease me
I don't want it if it's that easy

Uziel posted:

Just dove into powershell today when I got a request that seemed like it would be a pain in the rear end to write in a batch script: take a list of FQDNs and do an nslookup on them.
The only issue I'm encountering is getting the DNS client info, as I'm running Windows 7 and Get-DnsClient is only available on Windows 8 and above, despite PowerShell 3.
How else can I grab the client DNS server name and IP address?

Heyy, I did this exact thing last week. Use the built-in .NET class; it handles try/catch so much more gracefully than the wmic tool.

code:
foreach ($hostname in $hostnameList) {
    try {
        [System.Net.Dns]::GetHostByName("$hostname") | fl * | Out-String >> C:\users\admin\Desktop\goodhosts.txt
    }
    catch {
        "Couldn't resolve hostname: $hostname" >> C:\users\admin\Desktop\badhosts.txt
    }
}
edit: Meh, looks like it's not exactly what you're looking for. Your problem is going to be a pain in the rear end to solve if your hosts have multiple adapters in nonstandard configs, and will probably involve something like 'name -like broadcom*' to avoid getting lots of junk data.
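Something like this WMI sketch might be closer to the original ask, though (untested; the IPEnabled filter and the property picks are assumptions to tune for your adapters):

code:
# Sketch: per-adapter DNS servers on Windows 7, where Get-DnsClient isn't available.
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = TRUE" |
    Where-Object { $_.DNSServerSearchOrder } |
    Select-Object Description, DNSHostName, DNSServerSearchOrder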

Roargasm fucked around with this message at 20:44 on Nov 16, 2015


Danith
May 20, 2006
I've lurked here for years
Is anyone else using powershell to submit information to a website? I was able to get it working with websites that use the username/password popup box but now my current challenge is sites that have a .aspx page and do a javascript postback. Was wondering if anyone had an example of logging into a site that uses that before I continue beating my head against the wall.

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

Danith posted:

Is anyone else using powershell to submit information to a website? I was able to get it working with websites that use the username/password popup box but now my current challenge is sites that have a .aspx page and do a javascript postback. Was wondering if anyone had an example of logging into a site that uses that before I continue beating my head against the wall.
I have done it on sites that use forms-based authentication (even ASPX with all its weird extra runtime-generated hidden fields), but I don't think the forms I was using were doing javascript postbacks.

You need PowerShell 3+ for this, but I was using Invoke-WebRequest, then modifying the form object in the result (combined with -SessionVariable on the first request and -WebSession on subsequent requests).

The steps required can vary a lot depending on the site you're trying to work with and the fields involved. Is the site public? Can you show your existing code and where it's failing now? What are the errors?
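The skeleton of that pattern looks roughly like this (the URL and field names here are invented; dump the form object to see the real ones):

code:
# Rough sketch of forms-based login with Invoke-WebRequest; names are placeholders.
$login = Invoke-WebRequest -Uri 'https://example.com/Login.aspx' -SessionVariable session
$form = $login.Forms[0]
$form.Fields['Username'] = 'me'       # hypothetical field names --
$form.Fields['Password'] = 'secret'   # inspect $form.Fields for the real ones
$main = Invoke-WebRequest -Uri ('https://example.com/' + $form.Action) `
    -Method $form.Method -Body $form.Fields -WebSession $session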

12 rats tied together
Sep 7, 2006

I've also done it using the InternetExplorer object for front end web testing to help an old employer's QA department automate some of their more repetitive tasks. I did the same thing much easier though with the rubygem Watir, and I hear that PhantomJS is pretty well regarded for this sort of thing as well.

But you're probably using it as part of some PowerShell-specific workflow, so Ruby or JavaScript is probably out of the question. The IE object might be easier to make sense of mentally because it literally just spawns a headless IE process that behaves just as it would in a normal browser session. I didn't have to mess around with SessionVariables, but I imagine there is a speed hit from having to basically emulate IE. Compared to the tests running in Ruby, PowerShell IE manipulation was about 4-6 seconds longer per test, which added up rather quickly with ~8 tests per customer and 100+ customers.

Danith
May 20, 2006
I've lurked here for years

Briantist posted:

I have done it on sites that use forms-based authentication (even ASPX with all its weird extra runtime-generated hidden fields), but I don't think the forms I was using were doing javascript postbacks.

You need PowerShell 3+ for this, but I was using Invoke-WebRequest, then modifying the form object in the result (combined with -SessionVariable on the first request and -WebSession on subsequent requests).

The steps required can vary a lot depending on the site you're trying to work with and the fields involved. Is the site public? Can you show your existing code and where it's failing now? What are the errors?

Site is www.azrxreporting.com. Been beating my head against this all day. Lemme just post what I said at another site -

quote:

Ok, this aspx thing is frustrating.

I create the web request
code:
$wr = Invoke-WebRequest -Uri $site -SessionVariable ws
set the form info to the login -
code:
$wr.Forms[0].Fields.ctl00_lvNotification__Login__Username = $user
$wr.Forms[0].Fields.ctl00_lvNotification__Login__Password = $pass
Try to submit it (new web request because I didn't want to overwrite the old one at this time)
code:
$newwr = Invoke-WebRequest -Uri ($site + $wr.Forms[0].Action) -Method $wr.Forms[0].Method -WebSession $ws -Body $wr.Forms[0]
It returns status 200 OK but $newwr is showing the login page still.

So I install Fiddler to see what's going on. I find some fields that are submitted when I go through the website login, so I add them to the web request
code:
$wr.Forms[0].Fields.Add("MyNewField", "stuff")
Do another submit and again it says 200 OK, and the website is showing the login/password page again.

I think it has something to do with the site going to default.aspx after you log in, but when I try calling default.aspx after submitting the web request with the info, it still returns the login page. (The URL looks like this on the login page: Login.aspx?ReturnUrl=%2fdefault.aspx. After logging in you're at default.aspx.)


I think at this point I'll just try to make something work by going through an Internet Explorer browser object and interact with the site through that.


Reiz posted:



But you're probably using it as part of some PowerShell-specific workflow, so Ruby or JavaScript is probably out of the question. The IE object might be easier to make sense of mentally because it literally just spawns a headless IE process that behaves just as it would in a normal browser session. I didn't have to mess around with SessionVariables, but I imagine there is a speed hit from having to basically emulate IE. Compared to the tests running in Ruby, PowerShell IE manipulation was about 4-6 seconds longer per test, which added up rather quickly with ~8 tests per customer and 100+ customers.

I'm using it to learn PowerShell stuff and make life easier. I've been tempted to just use some macro program (iMacro maybe) but would like to do it through something that I can set up as a task and run... also, a script would be more likely to pass muster for prod than a macro program that pops up a bunch of windows. I may just end up doing an IE object :|

Danith fucked around with this message at 23:36 on Nov 18, 2015

Methanar
Sep 26, 2013

by the sex ghost
Possibly dumb question.

I've got a csv file with a bunch of fields in it and I want to modify some data in one of the fields.

I've tried a few different ways but this one has gotten me the closest so far.

code:
  import-csv .\text\data.csv | ForEach-Object { $_.status -replace "T", "Temp"} | Format-Table 
It works in the sense that it replaces all instances of T with Temp in the status field, which is great, but the status field is now the only thing output. The rest of the file is removed.

How can I modify data only in the status field without removing all the other fields?



I could just do this with Get-Content instead and it works fine, but I'd like to learn the CSV-specific way.
code:
 Get-Content .\text\data.csv | ForEach-Object { $_ -replace ",T,", ",Temp,"} | Format-Table 
The file looks like this.

FirstName,LastName,Position,Phone,Dept,Salary,Status,extraneousfield
Hugh,Kopp,Clerk,476-7454,Sales,34000,P,T
Dave,Lawson,Accountant,476-4356,Accounting,45000,P,T
John,Ross,Supervisor,475-3322,Engineering,76000,T,T

brosmike
Jun 26, 2009
In your non-working example,

code:
ForEach-Object { $_.status -replace "T", "Temp"}
...takes in each row as input and, for each row, outputs the result of running the command $_.status -replace "T", "Temp". The -replace operator outputs the input you give it, with the replacement you specified applied. You're only giving it the status field as an input, so it only outputs the replaced version of the status field.

What you want to do is to replace the content of the status field, and then output the entire row. To do that, you want something like:

code:
ForEach-Object {
    $_.status = $_.status -replace "T", "Temp"
    return $_
}
If your logic starts getting more complicated, you might want to separate out the sanitization logic into a separate filter to make it easier to read/debug. Then you'd end up with something like:

code:
filter Sanitize-Row {
    $_.status = $_.status -replace "T", "Temp"
    return $_
}

Import-Csv .\text\data.csv | Sanitize-Row | Format-Table 
(note that there's also an Export-Csv cmdlet you could use instead of Format-Table, if you need the data back to a file for analysis in Excel or something)
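For instance, the whole pipeline with Export-Csv (output path assumed) would be:

code:
# Same sanitization, but written back out to a new CSV instead of the console.
Import-Csv .\text\data.csv |
    ForEach-Object { $_.status = $_.status -replace "T", "Temp"; $_ } |
    Export-Csv .\text\data-clean.csv -NoTypeInformation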

Danith
May 20, 2006
I've lurked here for years
Oh my god, going through a browser object is so easy. And if I'm unsure of the element I can create the browser object - $ie = New-Object -ComObject InternetExplorer.Application
Make it visible - $ie.Visible = $true
Navigate to the page, click on the element and do a $ie.Document.activeElement to get all the info
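Roughly the whole flow, for anyone following along (the element IDs here are made up; use activeElement as above to find yours):

code:
# Sketch: drive IE through COM; getElementById names are hypothetical.
$ie = New-Object -ComObject InternetExplorer.Application
$ie.Visible = $true
$ie.Navigate('https://example.com/Login.aspx')
while ($ie.Busy -or $ie.ReadyState -ne 4) { Start-Sleep -Milliseconds 200 }  # 4 = complete
$ie.Document.getElementById('username').value = 'me'
$ie.Document.getElementById('password').value = 'secret'
$ie.Document.getElementById('loginButton').click()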

Methanar
Sep 26, 2013

by the sex ghost

brosmike posted:

In your non-working example,

code:
ForEach-Object { $_.status -replace "T", "Temp"}
...takes in each row as input and, for each row, outputs the result of running the command $_.status -replace "T", "Temp". The -replace operator outputs the input you give it, with the replacement you specified applied. You're only giving it the status field as an input, so it only outputs the replaced version of the status field.

What you want to do is to replace the content of the status field, and then output the entire row. To do that, you want something like:

code:
ForEach-Object {
    $_.status = $_.status -replace "T", "Temp"
    return $_
}


This was very helpful. Your command worked for some of the things I did but not others.

Ultimately I ended up using something more like this.

code:
$datacsv = Import-Csv .\text\data.csv

$datacsv | ForEach-Object { $_.status = $_.status -replace "T", "Temp" } -End { $datacsv } | Export-Csv data2.csv

$datacsv | ForEach-Object { $_.FirstName = $_.FirstName -replace '^([a-z])[a-z]*', '$1[redacted] ' } -End { $datacsv } | Format-Table >> whatever.txt

Swink
Apr 18, 2006
Left Side <--- Many Whelps
I often export mailboxes from Exchange using New-MailboxExportRequest. I then check the progress by running Get-MailboxExportRequest before going on with the next step in the task (usually deleting the mailbox).

How can I write a script that will perform the export and wait for the export to complete before running another command?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
You'll probably want a while loop:
code:
while (Get-Mailboxexportrequest -neq ???) {
    sleep 60
}
So I don't know what the Get command will look like, but if you can generate a single command to figure out whether it's complete or not, put that in the while condition. While that command returns true, it will execute what's in the loop. Once that command returns false, it will exit the loop.

If it's more complicated to figure out whether it's done or not, you can use a do while (or do until) loop. A do loop will always execute at least once, so you can put in code to check if it's done, store the result in a boolean variable, and then use that variable in the while or until. While and until are just antonyms, so while ($variable) is the same as until (-not $variable).
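For example, Get-MailboxExportRequest can filter on status, so the check might look like this (a sketch; I haven't run it against Exchange):

code:
# Sketch: block until no export requests are still running.
while (Get-MailboxExportRequest -Status InProgress) {
    Start-Sleep -Seconds 60
}

# do/until variant -- the body always runs at least once:
do {
    Start-Sleep -Seconds 60
} until (-not (Get-MailboxExportRequest -Status InProgress))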

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy
Anybody using PowerShell DSC looking to import SSL certificates via PFX files? A while ago I wrote a DSC resource to do just that, and now it's part of Microsoft's community resources.

Shameless self promotion: https://www.briantist.com/project/xpfximport-dsc-resource-for-importing-certificates-and-keys/

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy
http://mspsug.com/2015/12/08/mississippi-powershell-user-group-december-2015-virtual-meeting-tonight-at-830pm-cst/

Another (virtual) Mississippi PowerShell User Group tonight, 8:30pm CST. Open forum discussion meeting.

hooah
Feb 6, 2006
WTF?
I need to clean up my music directory by removing hidden album art files that iTunes shat everywhere and renaming the originals. I figured this would be as good a time as any to start learning PowerShell. Things were going reasonably well until it came time to modify file properties. I'm running the script from my Music directory, and most of the problematic files are two levels deep, which seems to be throwing off the full path names. Here is my script so far:
code:
$folders = Get-ChildItem -Recurse | Where-Object {$_.PSIsContainer}
foreach ($folder in $folders){
    $items = Get-ChildItem $folder.FullName -Force | Where-Object {!$_.PSIsContainer}
    if("Folder.jpg" -in $items.name -and "folder (1).jpg" -in $items.name){
        del Folder.jpg
        rename "folder (1).jpg" folder.jpg
    }
    elseif("Folder.jpg" -in $items.name){
        attrib -A -H -S Folder.jpg
        rename Folder.jpg folder.jpg
    }
    elseif("folder (1).jpg" -in $items.name){
        rename "\folder (1).jpg" folder.jpg
    }
}
When the delete line tries to run, I get an error saying that \Music\Folder.jpg wasn't found. How do I get the script to look in whatever folder $folder refers to? I tried doing something like $folder.FullName + "\Folder.jpg", but that threw an error saying I can't use + with delete. What do I need to do to fix this?

Danith
May 20, 2006
I've lurked here for years

hooah posted:

I need to clean up my music directory by removing hidden album art files that iTunes shat everywhere and renaming the originals. I figured this would be as good a time as any to start learning PowerShell. Things were going reasonably well until it came time to modify file properties. I'm running the script from my Music directory, and most of the problematic files are two levels deep, which seems to be throwing off the full path names. Here is my script so far:
code:
$folders = Get-ChildItem -Recurse | Where-Object {$_.PSIsContainer}
foreach ($folder in $folders){
    $items = Get-ChildItem $folder.FullName -Force | Where-Object {!$_.PSIsContainer}
    if("Folder.jpg" -in $items.name -and "folder (1).jpg" -in $items.name){
        del Folder.jpg
        rename "folder (1).jpg" folder.jpg
    }
    elseif("Folder.jpg" -in $items.name){
        attrib -A -H -S Folder.jpg
        rename Folder.jpg folder.jpg
    }
    elseif("folder (1).jpg" -in $items.name){
        rename "\folder (1).jpg" folder.jpg
    }
}
When the delete line tries to run, I get an error saying that \Music\Folder.jpg wasn't found. How do I get the script to look in whatever folder $folder refers to? I tried doing something like $folder.FullName + "\Folder.jpg", but that threw an error saying I can't use + with delete. What do I need to do to fix this?

del (($folder.FullName) + '\folder.jpg') ?
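Or Join-Path, which spares you the string concatenation:

code:
del (Join-Path $folder.FullName 'Folder.jpg')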

hooah
Feb 6, 2006
WTF?
I closed the ISE, and now it'll only open the script in read-only mode. Why is that?

Also, the above fix doesn't have any syntax errors, but now I get a permission error saying I don't have access rights to do that, even though I'm running the ISE as an administrator, which has full access to the file (as does my own account). What??

hooah fucked around with this message at 14:17 on Dec 9, 2015

Methanar
Sep 26, 2013

by the sex ghost

hooah posted:

I closed the ISE, and now it'll only open the script in read-only mode. Why is that?

Also, the above fix doesn't have any syntax errors, but now I get a permission error saying I don't have access rights to do that, even though I'm running the ISE as an administrator, which has full access to the file (as does my own account). What??

Did the execution policy change for some reason?

Does it happen with the 32-bit version of the ISE as well?

hooah
Feb 6, 2006
WTF?

Methanar posted:

Did the execution policy change for some reason?

Does it happen with the 32-bit version of the ISE as well?

It did change. The only reason I can think of is that I closed the ISE and the administrator shell. However, the issue persisted after I changed the execution policy back to Remote whatever.

Methanar
Sep 26, 2013

by the sex ghost
Maybe check if there are any zombie ISE processes left open in the task manager and force them all to close.

hooah
Feb 6, 2006
WTF?
I didn't find anything starting with ise or power.

The permissions thing was because the file was generated by iTunes, which annoyingly sets it to be a hidden system file. I copied the attrib line and it seems to be working fine now.

hooah fucked around with this message at 17:37 on Dec 9, 2015

nielsm
Jun 1, 2009



A slightly philosophical question:

What is the verb to use, if you want to set some implicit state/defaults for subsequent commands to act on?
Is that even a "sanctioned" idiom in Powershell?

The specific case is that I'm working on a module to manage a set of application config files (XML), where you may need to add, remove, and modify items inside.

I currently read the files, convert the items from the XML representation to simple .NET objects (defined as C# classes), and have some tools to work on those. Then a few commands to update the config files with the new objects. The commands that read/write files take parameters to indicate the files to work on, and I have defaults for the cmdlets set up to use the most common config file set.
The idea I have is writing a command that sets some implicit state to control which config file set the other commands will work on, if the file parameters aren't given on each command.


Command names I have considered:

Select-FooConfig - bad, since Select is supposed to be used to pick out elements from a collection, not change state

Enter-FooConfig - seems reasonable, but the only common command is Enter-PSSession, which effectively changes the entire environment; also implies a stack of states

Push-FooConfig - seems reasonable; the main example is Push-Location, which affects less of the environment than Enter-PSSession; also implies a stack of states

Use-FooConfig - maybe? I don't really know of any commands that use it

Set-FooConfig - not really, since it doesn't change any permanent state by itself

Zaepho
Oct 31, 2013

nielsm posted:

Command names I have considered:

Have you thought about New-FooConfig? You'd be creating a Configuration object of some sort to pass to all the other Foo cmdlets, like a DB connection object.

nielsm
Jun 1, 2009



Zaepho posted:

Have you thought about New-FooConfig? You'd be creating a Configuration object of some sort to pass to all the other Foo cmdlets, like a DB connection object.

Not quite... my intention is to have it be implicit state, similar to the current directory.

Edit: On the other hand, the command aliased to "cd" is Set-Location which doesn't really do permanent state changes either. So maybe Set-FooCurrentConfig would be it.

nielsm fucked around with this message at 15:31 on Dec 10, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

nielsm posted:

A slightly philosophical question:

What is the verb to use, if you want to set some implicit state/defaults for subsequent commands to act on?
Is that even a "sanctioned" idiom in Powershell?

The specific case is that I'm working on a module to manage a set of application config files (XML), where you may need to add, remove, and modify items inside.

I currently read the files, convert the items from the XML representation to simple .NET objects (defined as C# classes), and have some tools to work on those. Then a few commands to update the config files with the new objects. The commands that read/write files take parameters to indicate the files to work on, and I have defaults for the cmdlets set up to use the most common config file set.
The idea I have is writing a command that sets some implicit state to control which config file set the other commands will work on, if the file parameters aren't given on each command.


Command names I have considered:

Select-FooConfig - bad, since Select is supposed to be used to pick out elements from a collection, not change state

Enter-FooConfig - seems reasonable, but the only common command is Enter-PSSession, which effectively changes the entire environment; also implies a stack of states

Push-FooConfig - seems reasonable; the main example is Push-Location, which affects less of the environment than Enter-PSSession; also implies a stack of states

Use-FooConfig - maybe? I don't really know of any commands that use it

Set-FooConfig - not really, since it doesn't change any permanent state by itself
I would recommend not storing state in your own variable or whatever. That's not really done in PowerShell for the most part. When you think about something like Push-Location, it's really modifying the state of the system (at least the current environment, by changing the working directory), so it's not limited to a script.

I think the way to go, as distasteful as it might sound, is to make every cmdlet take the parameter, and make it mandatory.

To set defaults, the caller can use $PSDefaultParameterValues. That really is the idiomatic way to do this in PowerShell.

If you implement a cmdlet, its behavior should probably be to set the relevant value in $PSDefaultParameterValues (this is actually useful, because the caller may not know or want to enumerate all of the commands it applies to). Just make sure you preserve any existing values in the hashtable.

For that purpose, I think Set-FooDefaultConfig is an appropriate name.
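Concretely, the caller-side version (using the cmdlet and parameter names from earlier in the thread) is just:

code:
# A wildcard key applies the default to every matching command's -FooConfigFile.
# Indexing into the existing hashtable preserves any other defaults already set.
$PSDefaultParameterValues['*-Foo*:FooConfigFile'] = '\\server\with\file.xml'
Get-FooItems   # now picks up the default without the parameter being passed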

nielsm
Jun 1, 2009



Briantist posted:

I would recommend not storing state in your own variable or whatever. That's not really done in PowerShell for the most part. When you think about something like Push-Location, it's really modifying the state of the system (at least the current environment, by changing the working directory), so it's not limited to a script.

I think the way to go, as distasteful as it might sound, is to make every cmdlet take the parameter, and make it mandatory.

To set defaults, the caller can use $PSDefaultParameterValues. That really is the idiomatic way to do this in PowerShell.

If you implement a cmdlet, its behavior should probably be to set the relevant value in $PSDefaultParameterValues (this is actually useful, because the caller may not know or want to enumerate all of the commands it applies to). Just make sure you preserve any existing values in the hashtable.

For that purpose, I think Set-FooDefaultConfig is an appropriate name.

That's a really good point, and since the config filename parameters I'm already taking have quite unique names, it wouldn't clash with other things either, even if I use wildcard command names in the $PSDefaultParameterValues hash.

Thanks.

nielsm
Jun 1, 2009



Okay another question, regarding the same module. I'm having trouble making my Get-FooItems command take either name, id, or nothing, and also have a "comfortable" syntax.

code:
function Get-FooItems
{
    [CmdletBinding(DefaultParameterSetName="All", PositionalBinding=$false)]
    #[OutputType([FooItem])]
    Param
    (
        [Parameter(Mandatory=$true, ParameterSetName="Id", ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [int[]]
        $Id,

        [Parameter(Mandatory=$true, ParameterSetName="Name", Position=0, ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [string[]]
        $Name,

        [Parameter(Mandatory=$false, ValueFromPipelineByPropertyName=$false)]
        [Alias("File")]
        [string]
        $FooConfigFile = "\\server\with\file.xml"
    )

    Begin
    {
        #$xml = [xml](Get-Content $FooConfigFile)
    }
    Process
    {
        if ($psCmdlet.ParameterSetName -eq "Id") {
            $Id | % {"x$_"} #get object with id
        }
        elseif ($psCmdlet.ParameterSetName -eq "Name") {
            $Name | % {"y$_"} #get objects matching name
        }
        else {
            1,2,3 #get all objects
        }
    }
}
The Get-Help output looks reasonable:
pre:
SYNTAX
    Get-FooItems [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems -Id <int[]> [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems [-Name] <string[]> [-FooConfigFile <string>]  [<CommonParameters>]
But actually calling the command doesn't work as intended.

Works:
pre:
Get-FooItems                     # gets all
Get-FooItems -Id 1234            # gets a single id
Get-FooItems -Id 123,456         # gets multiple id's
Get-FooItems -Name abcd          # gets by a single name
Get-FooItems -Name abc,def       # gets by multiple names
123,456,789 | Get-FooItems       # gets multiple id's
"abc","def","ghi" | Get-FooItems # gets by multiple names
Fails:
pre:
Get-FooItems abc           # (A) expected to work, get items by single name
Get-FooItems 123           # (B) not expected to work, get item by id
Get-FooItems abc,def       # (C) expected to work, get items by multiple names
Get-FooItems 123,456       # (D) not expected to work, get items by multiple id's
Get-FooItems abc def       # (E) expected to work, get items by multiple names
Get-FooItems 123 456       # (F) not expected to work, get items by multiple id's
Get-FooItems -Name abc def # (G) expected to work, get items by multiple names
Get-FooItems -Id 123 456   # (H) expected to work, get items by multiple id's
Cases B, D and F are supposed to fail, since the Id parameter shouldn't be positional.
However cases A, C and E should work, since the Name parameter is positional, the FooConfigFile parameter is not positional, and the -Name flag is supposed to be optional (per the generated help).
Cases G and H should work with values specified as further arguments, since both Id and Name parameters are specified as ValueFromRemainingArguments. Shouldn't they then take any later positional arguments?

12 rats tied together
Sep 7, 2006

e: Nvm, misread. Taking a look.

12 rats tied together fucked around with this message at 22:18 on Dec 11, 2015

Briantist
Dec 5, 2003

The Professor does not approve of your post.
Lipstick Apathy

nielsm posted:

Okay another question, regarding the same module. I'm having trouble making my Get-FooItems command take either name, id, or nothing, and also have a "comfortable" syntax.

code:
function Get-FooItems
{
    [CmdletBinding(DefaultParameterSetName="All", PositionalBinding=$false)]
    #[OutputType([FooItem])]
    Param
    (
        [Parameter(Mandatory=$true, ParameterSetName="Id", ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [int[]]
        $Id,

        [Parameter(Mandatory=$true, ParameterSetName="Name", Position=0, ValueFromRemainingArguments=$True, ValueFromPipeline=$true)]
        [string[]]
        $Name,

        [Parameter(Mandatory=$false, ValueFromPipelineByPropertyName=$false)]
        [Alias("File")]
        [string]
        $FooConfigFile = "\\server\with\file.xml"
    )

    Begin
    {
        #$xml = [xml](Get-Content $FooConfigFile)
    }
    Process
    {
        if ($psCmdlet.ParameterSetName -eq "Id") {
            $Id | % {"x$_"} #get object with id
        }
        elseif ($psCmdlet.ParameterSetName -eq "Name") {
            $Name | % {"y$_"} #get objects matching name
        }
        else {
            1,2,3 #get all objects
        }
    }
}
The Get-Help output looks reasonable:
pre:
SYNTAX
    Get-FooItems [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems -Id <int[]> [-FooConfigFile <string>]  [<CommonParameters>]
    
    Get-FooItems [-Name] <string[]> [-FooConfigFile <string>]  [<CommonParameters>]
But actually calling the command doesn't work as intended.

Works:
pre:
Get-FooItems                     # gets all
Get-FooItems -Id 1234            # gets a single id
Get-FooItems -Id 123,456         # gets multiple id's
Get-FooItems -Name abcd          # gets by a single name
Get-FooItems -Name abc,def       # gets by multiple names
123,456,789 | Get-FooItems       # gets multiple id's
"abc","def","ghi" | Get-FooItems # gets by multiple names
Fails:
pre:
Get-FooItems abc           # (A) expected to work, get items by single name
Get-FooItems 123           # (B) not expected to work, get item by id
Get-FooItems abc,def       # (C) expected to work, get items by multiple names
Get-FooItems 123,456       # (D) not expected to work, get items by multiple id's
Get-FooItems abc def       # (E) expected to work, get items by multiple names
Get-FooItems 123 456       # (F) not expected to work, get items by multiple id's
Get-FooItems -Name abc def # (G) expected to work, get items by multiple names
Get-FooItems -Id 123 456   # (H) expected to work, get items by multiple id's
Cases B, D and F are supposed to fail, since the Id parameter shouldn't be positional.
However cases A, C and E should work, since the Name parameter is positional, the FooConfigFile parameter is not positional, and the -Name flag is supposed to be optional (per the generated help).
Cases G and H should work with values specified as further arguments, since both Id and Name parameters are specified as ValueFromRemainingArguments. Shouldn't they then take any later positional arguments?

This is a tough one. The parameter binding process sometimes seems to bind things in unexpected ways which leads to ambiguity where there appears to be none. This is compounded by pipeline support, ValueFromRemainingArguments, multiple parameter sets, and positional binding, and you've combined them all!

Some questions, since your intentions are a bit ambiguous to me:

Are those 3 parameter sets you see in the help intended? That is, do you really want a third parameter set where only $FooConfigFile is specified? Or did you actually just want that to be an optional parameter on the other two sets?

Do you really need ValueFromRemainingArguments? I find that this is usually a Bad Idea and it's really only used for script parameters when you have little control over how it's going to be called (some other pre-made thing is going to use spaces to separate an array of stuff and you can't change it). Getting rid of this would simplify it.

nielsm
Jun 1, 2009



Briantist posted:

This is a tough one. The parameter binding process sometimes seems to bind things in unexpected ways which leads to ambiguity where there appears to be none. This is compounded by pipeline support, ValueFromRemainingArguments, multiple parameter sets, and positional binding, and you've combined them all!

Some questions, since your intentions are a bit ambiguous to me:

Are those 3 parameter sets you see in the help intended? That is, do you really want a third parameter set where only $FooConfigFile is specified? Or did you actually just want that to be an optional parameter on the other two sets?

Do you really need ValueFromRemainingArguments? I find that this is usually a Bad Idea and it's really only used for script parameters when you have little control over how it's going to be called (some other pre-made thing is going to use spaces to separate an array of stuff and you can't change it). Getting rid of this would simplify it.

The 3 parameter sets the help shows are as intended, yes. The internal handling differs quite a lot depending on whether I need to get all items, filter by name, or fetch by id, so being able to test on $PSCmdlet.ParameterSetName makes the implementation simpler and easier to follow.
The config file is always needed, since that's where it fetches the data from.

ValueFromRemainingArguments is not strictly necessary, I just thought it would be neat if you could filter by multiple names or fetch multiple id's without worrying about commas. But it seems to be more trouble than it's worth, so I may as well scrap it.
Pipeline support also isn't strictly needed, at least not for the Name parameter. It seems most relevant for the Id parameter, although I'm not sure if there are any use cases for that either.

The most important case of the failed ones is A, and I don't understand why it fails. There should be exactly one possible call of the function that takes one positional parameter, and that's the "Name" parameter set.

12 rats tied together
Sep 7, 2006

I took a look at it. You can remove ValueFromRemainingArguments from either one of your parameter sets and it works and does more or less what you would expect. It's all hosed though. If you specify -Name abc,def you get "yabc" and "ydef" on newlines; if you just drop in "abc def" you get "yabc def". I would drop the whole thing and have 2 functions, because then everything would work the way you and everyone else expects.

Might be worth keeping in mind that if all you need to do is work with an xml document, ConvertFrom-XML exists so this function is probably redundant. Also if you can't just have Get-FooItemsByName and Get-FooItemsById, I would skip parameter sets entirely because they are stupid and just check to see if your input variable is a string or an int and then use that to determine whether you check for names or ids.
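The type-check version would be something like this (a sketch keeping the thread's placeholder outputs):

code:
# Sketch: one positional parameter, branch on runtime type instead of parameter sets.
function Get-FooItems {
    param([Parameter(Position=0)] $Identity)
    if ($null -eq $Identity) { 1,2,3; return }   # no input: get all
    foreach ($item in $Identity) {
        if ($item -is [int]) { "x$item" }        # fetch by id
        else                 { "y$item" }        # fetch by name
    }
}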

PBS
Sep 21, 2015

Reiz posted:

I took a look at it. You can remove ValueFromRemainingArguments from either one of your parameter sets and it works and does more or less what you would expect. It's all hosed though. If you specify -Name abc,def you get "yabc" and "ydef" on newlines; if you just drop in "abc def" you get "yabc def". I would drop the whole thing and have 2 functions, because then everything would work the way you and everyone else expects.

Might be worth keeping in mind that if all you need to do is work with an xml document, ConvertFrom-XML exists so this function is probably redundant. Also if you can't just have Get-FooItemsByName and Get-FooItemsById, I would skip parameter sets entirely because they are stupid and just check to see if your input variable is a string or an int and then use that to determine whether you check for names or ids.

Works fine for me.

PS C:\Users\PBS> Get-FooItems asdf fda
yasdf
yfda

Not really sure why it cares if you have ValueFromRemainingArguments set on both params when there's only one argument. It probably wouldn't work even if that wasn't an issue, though.

12 rats tied together
Sep 7, 2006

Actually this is kind of interesting.

Drop the valuefromremainingargs from the first parameter set:

code:
PS C:\Users\Rob> Get-FooItems abc def
yabc
ydef

PS C:\Users\Rob> Get-FooItems abc,def
yabc def
Drop it from the second set:
code:

PS C:\Users\Rob> Get-FooItems abc,def
yabc
ydef

PS C:\Users\Rob> Get-FooItems abc def
Get-FooItems : A positional parameter cannot be found that accepts argument 'def'.
At line:1 char:1
+ Get-FooItems abc def
+ ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Get-FooItems], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Get-FooItems
Might be different in console vs ISE, too. I never cared enough to really investigate why/how that happens.

PBS
Sep 21, 2015

Reiz posted:

Actually this is kind of interesting.

Drop the valuefromremainingargs from the first parameter set:

code:
PS C:\Users\Rob> Get-FooItems abc def
yabc
ydef

PS C:\Users\Rob> Get-FooItems abc,def
yabc def
Drop it from the second set:
code:

PS C:\Users\Rob> Get-FooItems abc,def
yabc
ydef

PS C:\Users\Rob> Get-FooItems abc def
Get-FooItems : A positional parameter cannot be found that accepts argument 'def'.
At line:1 char:1
+ Get-FooItems abc def
+ ~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Get-FooItems], ParameterBindingException
    + FullyQualifiedErrorId : PositionalParameterNotFound,Get-FooItems
Might be different in console vs ISE, too. I never cared enough to really investigate why/how that happens.

You mean remove it from both, or leave it active in the first set and disable it only in the second?

12 rats tied together
Sep 7, 2006

In my first post I said 'remove it from either', so yeah, just clarifying that it technically sort of works, technically sort of differently, depending on which set you remove it from.

I'm sure you can make some assumptions about the problem and the underlying reasons for the problem by comparing the two sets of output, but I think the real takeaway here is don't waste your time and just code it differently (or not at all if it can be avoided).

Dr. Kayak Paddle
May 10, 2006

Swink posted:

I often export mailboxes from Exchange using New-MailboxExportRequest. I then check the progress by running Get-MailboxExportRequest before going on with the next step in the task (usually deleting the mailbox).

How can I write a script that will perform the export and wait for the export to complete before running another command?

I use Get-MailboxExportRequestStatistics to get a nice status and percent complete output.

Swink
Apr 18, 2006
Left Side <--- Many Whelps
Here's an easy one that for some reason I am just not seeing the solution to. I'm blaming holiday booze.

I want to:

*Get a list of mailboxes
*If those mailboxes don't already have an Out of Office reply set,
*apply my generic out of office reply.


code:
$users = Get-Mailbox -OrganizationalUnit "OU=Staff,DC=company,DC=com"

foreach ($user in $users) {

if (get-mailboxAutoReplyConfiguration $user.mailbox -AutoReplyState -ne "disabled") { # <-- this part is not right at all

Set-MailboxAutoReplyConfiguration -Identity $user -AutoReplyState Scheduled -StartTime "12/23/2014 5:30pm" -EndTime "01/04/2015 5:00pm" -ExternalMessage $replytext -ExternalAudience:All -internalMessage $null

}
}

Phone posting but I think that syntax is ok.


Also bonus for any Melbourne goons: There's a Melbourne Powershell usergroup firing up (finally). This dude is running it: https://twitter.com/david_obrien

nielsm
Jun 1, 2009



Swink posted:

I want to:

*Get a list of mailboxes
*If those mailboxes don't already have an Out of Office reply set,
*apply my generic out of office reply.

Parenthesize the command in the if statement and pull out the property from that:
code:
if ((Get-MailboxAutoReplyConfiguration -Identity $user.guid).AutoReplyState -eq 'Disabled') {
  ...
}

Roargasm
Oct 21, 2010

Hate to sound sleazy
But tease me
I don't want it if it's that easy
I'm making a certificate expiry monitor in powershell + .net

code:
$mailMessage = ''

[System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true} #ignore warnings from self signed certs
$webClient = New-Object System.Net.WebClient #.net browser object
$targetSites = get-content c:\httpssites.txt
$warnBeforeDays = 500 

foreach ($targetSite in $targetSites) {
    $servicePoint = [System.Net.ServicePointManager]::FindServicePoint("$targetSite") # get cert info
    $webClient.DownloadString("$targetSite") > $null
    $expDate = $servicePoint.Certificate.GetExpirationDateString()
    $daysTilExp = (New-TimeSpan -Start (get-date -Format g) -End $expDate).Days

    if ($daysTilExp -le $warnBeforeDays) {
        $newData = "Site: $targetSite `n", $servicePoint.Certificate.Issuer, $servicePoint.Certificate.Subject,
            "`nPublic Key Algorithm:", $servicePoint.Certificate.GetKeyAlgorithm(),
            "`nExpires on $expDate `n`n"

        $mailMessage += $newData
    }
}

echo $mailMessage

Roargasm
Oct 21, 2010

Hate to sound sleazy
But tease me
I don't want it if it's that easy
^ Seems like doing this for FTP SSL certificates is a lot harder than doing it for an HTTPS connection :( Anyone have any pointers? Any way to get FTP SSL cert info without passing in a username/password?
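One possible angle: explicit FTPS servers present their certificate during the AUTH TLS handshake, before any credentials are sent, so a raw-socket sketch like this might get it (untested, hostname is a placeholder, and multiline replies aren't handled):

code:
# Untested sketch: grab an explicit-FTPS server's cert without logging in.
$client = New-Object System.Net.Sockets.TcpClient('ftp.example.com', 21)
$stream = $client.GetStream()
$reader = New-Object System.IO.StreamReader($stream)
$writer = New-Object System.IO.StreamWriter($stream)
$writer.AutoFlush = $true
$null = $reader.ReadLine()             # 220 banner
$writer.WriteLine('AUTH TLS')          # ask the server to upgrade to TLS
$null = $reader.ReadLine()             # expect 234
$ssl = New-Object System.Net.Security.SslStream($stream, $false, {$true})  # accept any cert
$ssl.AuthenticateAsClient('ftp.example.com')
$ssl.RemoteCertificate.GetExpirationDateString()
$client.Close()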


GPF
Jul 20, 2000

Kidney Buddies
Oven Wrangler
This isn't an answer to anyone's deep question, or a question about some odd thing. This is me being happy.

Due to restrictions on our workstations and servers, I've been locked into the Windows 7/Server 2008 R2 gimped PowerShell 4 for some time. I do tons of automation of DHCP and Printing infrastructure, among other things.

We finally moved our DHCP server to 2012 R2 last week. Today I had the opportunity to experiment. I have tons of PS code full of 'netsh dhcp' commands that pull lists and mangle them to shreds to get the info I need, which then fires off more 'netsh dhcp' commands. One particular automation piece I wrote is 75-80 lines of code to look for leases in the printer scopes, check whether they are printers, and, if they're actually printers in a database, turn them into reservations, update that database, and change the port on a print server to the new IP.

After 10 minutes just messing around with 'help dhcp' and doing some minor experimentation, I estimate I'll be able to cut that automation down to approximately 5 lines or fewer for the hard parts. And it'll run faster, with fewer errors.

I'm happy.
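For the curious, the lease-to-reservation piece really does collapse to roughly this with the 2012 R2 cmdlets (server and scope names are stand-ins):

code:
# Sketch: turn every lease in a printer scope into a reservation.
Get-DhcpServerv4Lease -ComputerName dhcp01 -ScopeId 10.1.20.0 |
    Add-DhcpServerv4Reservation -ComputerName dhcp01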
