Category Archives: PowerShell

WebLogic 12 Unattended Install And Patch Scripts Up

I am back, finally, with something to contribute to the PeopleSoft community.  The WebLogic unattended installation and patch scripts for WebLogic 12 are up on my GitHub.  You can get the install script here and the patching script here.  Patch automation has been switched to opatch, in line with the change to the Fusion Middleware model.  Be warned: both scripts are still hot off the press and could use better error handling.  I will get there soon, I hope.  I forgot to include the response file, so I will have to add that later, but you can find Oracle’s guide on creating one here, which is pretty helpful.  There aren’t too many options.
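For context, the core of an unattended WebLogic 12 install is just the generic installer jar run in silent mode against a response file.  This is only a sketch of that step, not the actual script from the repo, and the file names and paths here are examples:

```powershell
# Hypothetical sketch of a silent WebLogic 12 install.
# Installer jar name and paths are examples, not taken from the real script.
$installer    = 'C:\temp\fmw_12.1.3.0.0_wls.jar'
$responseFile = 'C:\temp\wls_install.rsp'

& java -jar $installer -silent -responseFile $responseFile
if ($LASTEXITCODE -ne 0) {
    throw "WebLogic installer exited with code $LASTEXITCODE"
}
```

The patching side follows the same pattern, invoking opatch apply with -silent from the Oracle home's OPatch directory.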

I’ve also committed a script that does the Windows service installation of the MicroFocus COBOL runtime license service for PeopleSoft.  You can download that script from GitHub here.  It should save some time when setting up new machines, since I can never remember the steps to install the service.
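In rough terms, a Windows service install like this boils down to registering the license manager executable as an auto-start service.  The executable name and path below are assumptions for illustration only; check the actual script on GitHub for the real steps:

```powershell
# Illustrative only: the executable name and path for the Micro Focus
# license service are assumptions, not taken from the real script.
$exe = 'C:\Apps\MicroFocus\bin\mflmman.exe'   # hypothetical path

New-Service -Name 'MicroFocusLicenseManager' `
            -BinaryPathName $exe `
            -DisplayName 'Micro Focus License Manager' `
            -StartupType Automatic
Start-Service -Name 'MicroFocusLicenseManager'
```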

Now that we are finally starting to look at an upgrade to Tools 8.54 I’ve kind of been trying to get everything lined up and ready to go so we can have as much of an automated install as possible.  To that end, I’ve been experimenting with shared PS_HOMEs, which is not really a new feature but has supposedly been ironed out quite a bit in 8.54.

The advantage to doing PS_HOME this way is that instead of having 11 (or however many application servers you have) PS_HOMEs to patch each time you apply a bundle or patchset, you patch one, and every server in that environment is upgraded automatically at the same time, since they all use the same PS_HOME.  This will be a huge time saver and reduce the number of places where something can go wrong.

I built a demo environment over the past week or so to test things out, and despite Oracle’s claims that a lot of the bugs have been worked out, I didn’t really see it that way.  You can read more about the shared PS_HOME process for Windows on Oracle’s site, but the important things to take away are:

  • Make sure your Tuxedo and PeopleSoft services are set to start as a domain service account.  Otherwise, they won’t have access to the network share.
  • You must use a mapped drive to get Tuxedo to work with a shared home, but your actual PS_HOME variable seems to require a UNC path.  I don’t know why this discrepancy exists.
  • Make sure you set your TM_TUXIPC_MAPDRIVER environment variable as a systemwide environment variable.  If you want Tuxedo and your domains to start with no user logged on there is no other way.
  • This might be an error in the way I have things set up (even though the app server and process scheduler domains start correctly), but using the shared homes I cannot get psadmin to work.  It will always show the domain status as not started.  I don’t know if this is because I am not logged on to the service account when running psadmin, but it is going to change the way a lot of our other scripts work if we can’t use psadmin.
  • Be prepared for your domains to take a long time to start.  In my demo environment it takes about 7 minutes to bring a single app server domain and process scheduler domain up.  Your Windows service will say it timed out but it is really still loading in the background.
  • You will want to seriously consider a cluster of Server 2012 R2 file servers with SMB 3 so you get a highly available file share.  I shut the file server down once just to see what would happen, and it crashed the PeopleSoft environment.
  • Make sure your PS_CFG_HOME is on a local drive.  You don’t want to be writing logs back to the network location.  Also, make sure your service account running Tuxedo and PeopleSoft has write access to PS_CFG_HOME or your domains won’t start.
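Setting TM_TUXIPC_MAPDRIVER at the machine level, as mentioned above, can be done in one line.  The drive letter and share below are examples from a hypothetical setup, not my actual environment:

```powershell
# Sketch: set TM_TUXIPC_MAPDRIVER machine-wide so Tuxedo sees it even with
# no user logged on. The drive letter and UNC share are example values.
[Environment]::SetEnvironmentVariable(
    'TM_TUXIPC_MAPDRIVER',
    'P:=\\fileserver\ps_home',
    'Machine')
```

Services pick up machine-level environment variables at start, so a restart of the Tuxedo and PeopleSoft services (or a reboot) is needed after setting it.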

I am still ironing out the kinks in the shared PS_HOME, but the ease of patching and the disk space savings are well worth the time it takes to figure it out.

One last thing – to the readers who asked how the license key is specified in the unattended installation of PeopleTools: I told you it was only needed to determine which database type was used, but that information might be wrong.  There appears to be a step during database configuration where the key is required so it can be added to a particular table.  I’m guessing this may be why unattended installation of Tools is not supported.  So close, yet so far away.  It may be possible to work around this, but I am not willing to push it because it may violate a license agreement, and I don’t want Oracle after me – I can’t afford that.  Sorry to have misled people, but together we can make Oracle support unattended installs if we push them hard enough.

Tab Clearing And Office 365 ProPlus Provisioning

Well, I am back again.  It seems like every time I re-enable my login page after it disables itself due to brute-force attacks, I lose the desire to post for a while.  So today I installed BruteProtect, which should help out a bit with that.  Hopefully it will mean less management overhead for me.

If you work with Office 365 for Education at all, you probably heard the news today that Microsoft is making Office 365 ProPlus available to students for free.  You’ll notice the subtitle of that article is “New self-serve process skips IT, makes it easier to get free Office 365”, and I don’t understand why Microsoft would do that.

We have already been automatically provisioning ProPlus licenses to enrolled students via a PowerShell script I wrote, so who knows what would happen if those students went and enrolled themselves.
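The provisioning itself uses the MSOnline cmdlets.  This is only a rough sketch of the per-student licensing step, not my actual script; the UPN and SKU name are placeholders, and you should check Get-MsolAccountSku for your tenant’s real SKU:

```powershell
# Sketch of licensing a single student with the MSOnline module.
# The UPN and SKU id below are example values, not real ones.
Connect-MsolService

$upn = 'student@school.edu'

# UsageLocation must be set before a license can be assigned
Set-MsolUser -UserPrincipalName $upn -UsageLocation 'US'
Set-MsolUserLicense -UserPrincipalName $upn `
                    -AddLicenses 'school:OFFICESUBSCRIPTION_STUDENT'
```

In practice you would pull the enrolled-student list from your SIS or AD and loop over it, but the two cmdlets above are the heart of it.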

This behavior can be disabled, of course, using the following PowerShell commands after you connect to your tenant using Connect-MsolService:

Set-MsolCompanySettings -AllowAdHocSubscriptions $false
Set-MsolCompanySettings -AllowEmailVerifiedUsers $false
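To confirm the change took, the company information should now show both flags as False.  A quick check, assuming you are still connected to the tenant:

```powershell
# Verify the self-service signup settings after the change
Get-MsolCompanyInformation |
    Select-Object AllowAdHocSubscriptions, AllowEmailVerifiedUsers
```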

The problem with this is that it requires the latest version of the Azure Active Directory module.  Unlike the previous versions, these cmdlets only run on PowerShell versions higher than 2.0.  So, if your DirSync server runs 2008 R2 like ours does, you need to upgrade PowerShell first.  No big deal to upgrade, right?  I figured I’d even go to PowerShell 4, because why not?

Unfortunately, there is a very particular order in which v4 needs to be installed on a 2008 R2 machine.  Basically, you need to install the standalone .NET 4.5.1 package BEFORE you install the .msu for PowerShell v4, or it doesn’t work properly.  If you screw up and do the v4 install first, you need to remove it and start over.  There’s no indication while the update is applying that you’re missing the required .NET pieces; it just installs successfully and then fails to work.  You’d think Microsoft could throw a prerequisite check in there just to be nice, but I guess not.  Anyway, there’s a nice writeup about the proper process you can read here.
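Since the installer won’t warn you, it’s worth checking for yourself before and after.  A quick sanity check of the PowerShell version and whether .NET 4.5.1 is actually on the box (the .NET 4.x Release value is stored in the registry; 378675 and up indicates 4.5.1):

```powershell
# Current PowerShell version
$PSVersionTable.PSVersion

# Is .NET 4.5.1 or later installed? (Release >= 378675 means 4.5.1+)
$net4 = Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full'
$net4.Release -ge 378675
```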

That’s about all for this installment of Other Duties As Required though.  Stay tuned for the next episode!

C# Folder Size, Or Maybe I Am Ignorant

I apologize for the lack of posting recently.  Since it is Fall registration, we are in a freeze, which means no changes to anything until it’s over.  It’s always a good time to research something I might need later.  I’ve been paging through Advanced Windows Debugging, but I’m not enough of a programmer to understand everything.

Additionally, my site was under some sort of attack which ended up with my host disabling the log-in page for WordPress.  Good for them, I was glad they did.  Not sure what caused that, but you can be assured none of your personal information was stolen, since I don’t keep any, and the system wasn’t compromised anyway.

In any case, I was looking at a question on Reddit about using PowerShell to get folder sizes, creation times, and modification times from a script.  The problem was that the paths could be more than 260 characters long.  If you’ve never run into the problem, here is the background.  While the poster did finally manage to get the folder size using robocopy, I suggested he use .Net to get the creation and modified dates.  This is fairly straightforward:

$CreateTime = [System.IO.File]::GetCreationTime($i)
$LastModified = [System.IO.File]::GetLastWriteTime($i)

where $i is the file or folder name.

But then I got to thinking – surely there must be a better way than robocopy to get file sizes for long paths?  Let’s take a look at .Net for this too…

Since GetCreationTime and GetLastWriteTime seem to work on folders as well, despite the class being named “File”, I thought the following code would work too:

$size = 0
$files = [System.IO.Directory]::GetFiles($i)
foreach ($file in $files) {
    # GetFiles returns path strings, so cast to FileInfo to get the size in bytes
    $size += ([System.IO.FileInfo]$file).Length
}

But this will only sum the files directly inside the directory; it doesn’t include anything in the subfolders underneath it.  You actually need to iterate through the whole source tree to get everything that might be there in order to get the total size.  As far as I can tell, the Windows API simply has no call that returns a directory’s total size.
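One way to do that walk in a single loop is to pass SearchOption.AllDirectories so .Net recurses for you.  A minimal sketch, assuming .NET 4 for EnumerateFiles; note it will still throw PathTooLongException on paths over 260 characters, so it doesn’t solve the original long-path problem:

```powershell
# Total size in bytes of a folder tree, recursing into subfolders.
# $path is an example; EnumerateFiles requires .NET 4 (use GetFiles on 3.5).
$path  = 'C:\some\folder'
$total = 0
foreach ($f in [System.IO.Directory]::EnumerateFiles(
                   $path, '*', [System.IO.SearchOption]::AllDirectories)) {
    $total += ([System.IO.FileInfo]$f).Length
}
$total
```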

Now, that being said, I am not a C# developer by any means, so there might be an easy way to do this that I don’t know about.  It just seems inconsistent to me given the naming of the two classes: I can get a folder’s modified and creation times through File calls, but not the size of a root folder.  Why?

If Get-ChildItem could lose its 260-character limitation, that would probably help out a lot.  I suppose you could use PSDrives to shrink the path, but sometimes that wouldn’t be an option given the structure of a file system.
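For completeness, the PSDrive workaround looks like this: mount a deep folder as a drive root so the remaining relative paths fit under the limit.  Paths here are examples:

```powershell
# Shorten effective path length by mounting a deep folder as its own drive.
# The share path is an example value.
New-PSDrive -Name X -PSProvider FileSystem `
            -Root '\\server\share\some\very\deep\folder'
Get-ChildItem X:\ -Recurse
```

As noted above, this only works when the tree is deep because of a long prefix; a single folder whose own contents exceed the limit is still stuck.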

PowerShell Remoting HTTPS Group Policy Configuration

Edited to add – This document apparently only works with Windows Server 2008 R2.  When tested with Server 2012 R2 the steps fail, so keep this in mind.

…and the first restoration is the PDF I wrote about PowerShell Remoting configuration with HTTPS and Group Policy, including ACLs.  This file was the reason I started the old blog; you didn’t think I’d lose it, did you?

For your viewing pleasure, download here.
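As a taste of what the PDF covers, here is the manual equivalent of one piece it automates via Group Policy: creating the HTTPS WinRM listener.  The hostname and certificate thumbprint are placeholders:

```powershell
# Manually create an HTTPS WinRM listener (the PDF does this via GPO).
# Hostname and thumbprint below are placeholder values.
New-WSManInstance winrm/config/Listener `
    -SelectorSet @{ Address = '*'; Transport = 'HTTPS' } `
    -ValueSet   @{ Hostname = 'server.domain.local'
                   CertificateThumbprint = 'THUMBPRINT' }
```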