Notes

Content is based on this article from the Yubico documentation, found here:

Requirements

  • OpenSSH (v9.4+), found here
  • Git (v2.42+), found here
  • A configured GitLab account (it does not matter whether it is GitLab.com or a self-hosted CE (Community Edition) instance; I used it with a proprietary CE install and it worked, so it should work without problems with the online version as well)
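A quick way to check whether the installed versions meet these requirements (a sketch; on Windows run it in PowerShell or Git Bash):

```shell
# Both commands print the installed version; compare against the minimums above.
ssh -V            # OpenSSH, needs 9.4 or newer
git --version     # Git, needs 2.42 or newer
```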

Configuration

In an existing repository you need to tell Git to use SSH instead of PGP for signing:

git config gpg.format ssh

Generate a new key of a type that supports the Yubikey (an ECDSA security key). During the process you will be asked to select the device for signing and to confirm the key generation with a touch on your Yubikey. (If you want the key stored on the Yubikey itself, so it can later be re-downloaded with ssh-keygen -K, add the -O resident option.)

ssh-keygen -t ecdsa-sk -f c:\users\yourusername\.ssh\id_ecdsa_sk

Set the key to be used for signing:

git config user.signingKey c:\users\yourusername\.ssh\id_ecdsa_sk

If you have more SSH keys, you should already have a config file at c:\users\yourusername\.ssh\config; add the newly created private key for the destination GitLab server there to ensure the new key is used.
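For example, a Host block like this makes SSH present the new key (a sketch: the host name gitlab.com and the key path are assumptions, adjust them to your setup; on Windows the same block goes into c:\users\yourusername\.ssh\config):

```shell
# Sketch: append a Host block for the destination GitLab server to the SSH config.
mkdir -p ~/.ssh
cat >> ~/.ssh/config <<'EOF'
Host gitlab.com
    IdentityFile ~/.ssh/id_ecdsa_sk
    IdentitiesOnly yes
EOF
```

IdentitiesOnly prevents SSH from offering your other keys first.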

Copy the content of c:\users\yourusername\.ssh\id_ecdsa_sk.pub to c:\users\yourusername\.ssh\allowed_signers, prefixed with your committer e-mail address (each allowed_signers line has the form: principal, key type, key).
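On Linux or in Git Bash the entry can be built like this (a sketch; the e-mail address and the key value are placeholders):

```shell
# An allowed_signers line is "principal key-type key": the principal (your
# committer e-mail, a placeholder here) goes in front of the public key.
EMAIL="you@example.com"
PUBKEY="sk-ecdsa-sha2-nistp256@openssh.com AAAAExample keycomment"   # contents of id_ecdsa_sk.pub
printf '%s %s\n' "$EMAIL" "$PUBKEY" > allowed_signers
cat allowed_signers
```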

Run this command to define the allowed_signers file:

git config gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers

Also paste the public key into your GitLab account (Preferences → SSH Keys; set the usage type to Signing).

And that’s it! If I did not forget anything, this should be enough: commits created with git commit -S will be signed, and you can verify them locally with git log --show-signature.

Umbrel – notes and hints

I’m starting to research what can be done with the Umbrel service. I initially started dealing with it to configure a Bitcoin node and eventually to learn how to run a Lightning node. It was a big surprise how many things can be installed as Docker images through Umbrel. I think this service is worth researching, so here is a link to the project: https://umbrel.com/

How to install an Umbrel app from the command line

Sometimes the installation of an Umbrel app might fail and the reason cannot be found at first sight. To get past it, try installing the app from the shell; as a result you might see an error message.

 

cd umbrel

sudo ./scripts/app install pi-hole

Reference: https://www.youtube.com/watch?v=2DFwWKp0nVo

Redirect Bitcoin Node data to additional storage

Umbrel can even run on a Raspberry Pi 4. Due to lack of such hardware, I decided to use my Proxmox server and Ubuntu to get it working. Everything was quite straightforward. The only issue I found is that the Bitcoin app does not have explicitly customizable storage for the Bitcoin Node app data. The blockchain size at the time of writing this article is about 500 GB, therefore I prepared a separate virtual disk just for this purpose.

Prepare disk

Let’s assume the disk is physically connected to your Ubuntu system. You need to log on to the console as root, or use sudo su to switch to root.

For an easier setup I would suggest installing gparted and opening it with the following commands:

 

apt install gparted

gparted

Once you open gparted, switch to the device you want to adjust in the combo box on the right. In my case it was /dev/sdb with a size of 1 TB. You need to create a new partition table on the newly attached disk by selecting Device → Create Partition Table, and then create an ext4 filesystem. After setting everything to the desired state, click Apply All Operations (the green circle with a check mark) to apply the changes.

 

Gparted steps

It is very helpful to get the UUID of the partition, which can be used for automatic mounting of the disk. Right-click the partition and select the Information option to find this number.

gparted information

Now copy this UUID to your notes, you will need it in the next steps!

UUIDinfo
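The gparted steps above (new partition table, ext4 filesystem, reading the UUID) can also be done from the command line. The sketch below runs against a file-backed image so it is safe to try; on the real system you would target the device itself (e.g. /dev/sdb — verify the name with lsblk first, as formatting destroys existing data):

```shell
# Stand-in for the disk: a small file-backed image instead of /dev/sdb.
truncate -s 64M disk.img
# Create the ext4 filesystem (-F because the target is a plain file, -q quiet).
mkfs.ext4 -q -F disk.img
# Print the UUID needed for the /etc/fstab entry in the next step.
blkid -s UUID -o value disk.img
```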

Now we can configure the mount file to allow automatic mounting of the newly created partition. Let’s create a mount folder and set its ownership to the user that runs Umbrel.

Replace johndoe with the real user name!!!

Note: I’m not sure it is needed, but I changed the ownership of the folder itself as well as the mounted disk. The chown might not be necessary, but since I did not try without it, I’m writing it here to be sure the guide works.

Suppose the mount folder will be /media/data

mkdir /media/data

chown -R johndoe:johndoe /media/data

Edit fstab with your favourite editor.

nano /etc/fstab

Add a mount line there; this one worked for me (please adjust the UUID to the one you copied from gparted). The fields are: device (by UUID), mount point, filesystem type, mount options, dump flag, and fsck pass order.

 

UUID=5b7fe1b3-93c8-4399-8a05-ef8653b2d13b /media/data ext4 defaults 0 2

After saving the file, exit the editor and perform the following steps: mount the partition and ensure the disk is owned by the user that runs Umbrel.

mount /media/data

chown -R johndoe:johndoe /media/data

Adjust the Docker setup of the Bitcoin Node app

Umbrel is usually installed in the user’s home folder. Once you install the Bitcoin Node app, you need to adjust its YAML file. In the home folder you should have an umbrel folder; the following commands stop the Docker services and open the YAML file where the path needs to be adjusted. Please ensure that your console is opened in the context of the user under which you installed Umbrel, otherwise cd will switch to the currently active profile!!!

Please back up your docker-compose.yml file before editing it!

cd ~/umbrel

./scripts/stop

cd ~/umbrel/app-data/bitcoin

nano docker-compose.yml

In docker-compose.yml, adjust the locations in the service section (I’m not sure whether this part was needed, but better check it).

docker-compose.yml service

What I definitely needed to adjust is the bitcoind section, where I modified the path in volumes from ${PWD}:/data/.bitcoin to /media/data:/data/.bitcoin, and added a line with /dev/sdb1:/dev/sdb1 under devices to ensure that Docker can find the device.

docker-compose.yml
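For illustration, the edited part of docker-compose.yml ended up looking roughly like this (a sketch from memory: the service name bitcoind, the indentation, and the other keys of the service must match your actual file, which has more content than shown here):

```yaml
services:
  bitcoind:
    volumes:
      - /media/data:/data/.bitcoin   # was: ${PWD}:/data/.bitcoin
    devices:
      - /dev/sdb1:/dev/sdb1          # expose the external disk to the container
```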

After saving the docker-compose.yml file, switch back to the umbrel folder and start the Docker services.

 

cd ~/umbrel

./scripts/start

 

If all goes well, you should see that the Bitcoin Node container started successfully. If something went wrong, you should see it in the console.

 

Electrs

This Docker implementation does not work once you move the database to a different location.

 

Symptoms:

The Electrs dashboard still shows "waiting for bitcoin node to finish syncing".

When you run this command:

~/umbrel/scripts/app compose electrs logs

you get this error:

failed to open bitcoind cookie file: /data/.bitcoin/.cookie

electrs_1 | 1: No such file or directory (os error 2)

electrs_1 | [2023-05-24T18:20:54.900Z INFO electrs::db] "/data/db/bitcoin": 17 SST files, 0.000013685 GB, 0.000000017 Grows

Solution:

Adjust the line in docker-compose.yml for the Electrs app:

Under volumes:

Line:

- "${APP_BITCOIN_DATA_DIR}:/data/.bitcoin:ro"

To:

- "/media/data:/data/.bitcoin:ro"

 

Add a devices section (the device name depends on where you have the external drive for the Bitcoin app database):

devices:
  - /dev/sdb1:/dev/sdb1

I found this solution here: https://community.getumbrel.com/t/electrs-server-not-synchronizing-when-bitcoin-blocks-stored-externally-on-ssd-drive/11230

 

This hint is very straightforward but might be helpful if you don’t have any tool for full-text search in files.

In this case I was looking for the string "JoeDoe" in the folder C:\Test (add -Recurse to Get-ChildItem if you want to search subfolders too).

get-childitem -Path "C:\Test" | Select-String "JoeDoe"

The result is:

get-childitem -Path "C:\Test" | Select-String "JoeDoe"

C:\Test\file1.txt:2:JoeDoe
C:\Test\file2.txt:6:JoeDoe

 

As you can see, it shows the file, the line number, and the matched text.

Graph API script to get lastPasswordChangeDateTime.
import-module msal.ps
# Find the app certificate in the local machine store by thumbprint
$cert = Get-ChildItem -Path Cert:\LocalMachine\MY | ?{$_.Thumbprint -eq "B5****3" }
# Acquire an app-only token using certificate authentication
$token = Get-MsalToken -ClientId 12345678-1234-1234-1234-12344567489 -ClientCertificate $cert -TenantId 1234567-1234-1234-1234-123456789156
$allusers = @()
# First page of results, selecting only the needed properties
$users = Invoke-RestMethod -Headers @{Authorization = "Bearer $($token.AccessToken)" } -Uri 'https://graph.microsoft.com/v1.0/users?$select=userPrincipalName,mail,lastPasswordChangeDateTime' -Method Get
$allusers = $allusers + $users.value
# Follow @odata.nextLink until all pages are fetched
while($users.'@odata.nextLink' -ne $null)
{
    $users = Invoke-RestMethod -Headers @{Authorization = "Bearer $($token.AccessToken)" } -Uri $users.'@odata.nextLink' -Method Get
    $allusers = $allusers + $users.value
}
# Returns users from the aggregated variable
$allusers

Here is an example of how to handle the situation when the page limit is reached (I had a limit of 100 objects per GET response). The following code processes each response and uses @odata.nextLink to get the next set of results. This article is not a big deal; it is rather my note for further learning of the Graph API :).

 

$allusers = @()

$users = Invoke-RestMethod -Headers @{Authorization = "Bearer $($token.AccessToken)" } -Uri 'https://graph.microsoft.com/v1.0/users' -Method Get
$allusers = $allusers + $users.value

while($users.'@odata.nextLink' -ne $null)
{
    $users = Invoke-RestMethod -Headers @{Authorization = "Bearer $($token.AccessToken)" } -Uri $users.'@odata.nextLink' -Method Get
    $allusers = $allusers + $users.value
}

# Returns users from the aggregated variable
$allusers

Here is a useful command which generates a report of scheduled tasks:

 

schtasks.exe /query /s localhost  /V /FO CSV | ConvertFrom-Csv | Where { $_.TaskName -ne "TaskName" } | select TaskName,"Next Run Time",Status,"Logon Mode","Last Run Time","Last Result",Author,"Task To Run","Start In",Comment,"Scheduled Task State","Run As User" | Export-Csv -Encoding UTF8 -Path C:\pathtocsv\file.csv -NoTypeInformation

 

Note: just modify the -Path parameter to a valid path where the CSV file will be written.

The main reason for doing this is the fact that GA users have MFA enabled by default, which basically blocks automations initiated from on-premise to Azure AD. There are a few workarounds, and this article describes one of them. I would like to point out that I just followed an already existing article, so I’m not the author of this. Be aware that losing control of the certificate will cause a security breach: an attacker can use the assigned role!

Before continuing, run PowerShell ISE with elevated credentials (because we will work with certificates and modify the local disk, and you might get stuck in the middle of an action due to insufficient permissions).

You need to connect to Azure AD. You can also do it with the PowerShell module AzureADPreview. Whether you use the default AzureAD or AzureADPreview module, connect to Azure AD with the following command.

import-module azuread 
# or import-module azureadpreview
connect-azuread -tenantID IDoftenant

Then define the $pwd variable, which contains the password for the PFX container where the certificate will be stored.

$pwd = "plaintext password"

Create a folder where the certificate will be stored (in the code below I used C:\temp).

Now let’s create the certificate; please set the -DnsName value to your registered tenant domain.

$notAfter = (Get-Date).AddMonths(6) # Valid for 6 months
$thumb = (New-SelfSignedCertificate -DnsName "mytestdomain.onmicrosoft.com" -CertStoreLocation "cert:\LocalMachine\My" -KeyExportPolicy Exportable -Provider "Microsoft Enhanced RSA and AES Cryptographic Provider" -NotAfter $notAfter).Thumbprint
$pwd = ConvertTo-SecureString -String $pwd -Force -AsPlainText
Export-PfxCertificate -cert "cert:\localmachine\my\$thumb" -FilePath c:\temp\examplecert.pfx -Password $pwd

 

Load the certificate to be able to use it in Azure AD.

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate("C:\temp\examplecert.pfx", $pwd)
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

 

Please modify the IdentifierUris parameter to reflect your tenant name. Adjust DisplayName and CustomKeyIdentifier if you want.

$application = New-AzureADApplication -DisplayName "test123" -IdentifierUris "https://mytestdomain.onmicrosoft.com"
New-AzureADApplicationKeyCredential -ObjectId $application.ObjectId -CustomKeyIdentifier "Test123" -Type AsymmetricX509Cert -Usage Verify -Value $keyValue -EndDate $notAfter

 

Prepare a service principal for the new application.

$sp=New-AzureADServicePrincipal -AppId $application.AppId

 

Now let’s add a role to our new application. You can select whatever role you want; you can list the roles with the following command. In my case I needed GA permissions, so what I was looking for is the role named "Global Administrator".

 

 Get-AzureADDirectoryRole

 

Here is the command to add the desired role.

 

Add-AzureADDirectoryRoleMember -ObjectId (Get-AzureADDirectoryRole | where-object {$_.DisplayName -eq "Global Administrator"}).Objectid -RefObjectId $sp.ObjectId

 

Now you can use the following command to connect to the tenant. You need to replace these values:

  • YourTenantID (your tenant ID, which you can find in the Azure Portal)
  • YourApplicationID (in my case I could get it with Get-AzureADApplication -SearchString "test123")
  • YourCertThumbprint (find it in the computer’s Personal – Certificates store: double-click the certificate, open the Details tab, and read the Thumbprint field)

 

 

Connect-AzureAD -TenantId YourTenantID -ApplicationId YourApplicationID -CertificateThumbprint YourCertThumbprint

 

 

Reference: https://learn.microsoft.com/en-us/powershell/azure/active-directory/signing-in-service-principal?view=azureadps-2.0

 

Backup and Restore

Backup

From Windows 10, you can do a backup with the wbadmin command. This is an example of how to perform a backup to a UNC path; however, the backup target can also be a locally attached USB disk.

 

wbadmin start backup -backupTarget:\\wds1\winback\win10_2 -include:C: -allcritical -quiet

Restore

Note: recovery worked only when an identical disk was used. When I used a new disk it did not work, and I tried wbadmin start sysrecovery, but that did not work either. When I started recovery I got error 0x80042308 (object not found). So I ended up using WinRE (recovery from the original installation media) in order to recover the system to the new disk, where I could use the backup from the Backup section of this article.

For recovery (cloning), just boot from Windows installation media (it is important to have the same installation media architecture and OS to ensure that recovery is possible). In my case I used WinPE (be aware that the backup feature is not available in WinPE; only partition recovery is possible).

To list the available backup versions, run:

wbadmin get versions -backuptarget:\\wds1\winback\win10_2 -machine:DESKTOP-9CL3JDP

The output will show the version identifier which needs to be passed to the recovery command:

In this case the version is 07/31/2022-08:37, so the command for recovery will be:

wbadmin start recovery -version:07/31/2022-08:37 -itemtype:Volume -items:C: -BackupTarget:\\wds1\winback\win10_2 -machine:DESKTOP-9CL3JDP -quiet

After authentication, the recovery process should be started.

 

Details about WBAdmin backup and restore are here:

https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/wbadmin-start-backup

https://docs.microsoft.com/en-us/windows-server/administration/windows-commands/wbadmin-start-recovery

A pretty useful trick is to scramble part of a strong password with MD5. An attacker running a dictionary attack has a difficult time when a weak password is combined with the MD5 hash of another weak password 🙂.

 


░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░░░┌────────────┬──────────────┐░░░░░░░░
░░░│░░░░░░░░░░░░│░░░░░░░░░░░░░░│░░░░░░░░
░░░│ Weak pwd1  │ MD5 of pwd2  │░░░░░░░░
░░░└────────────┴───────┬──────┘░░░░░░░░
░░░░░░░░│░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░│░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░│░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░│░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░│░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░Weak pwd1░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░┌───────────┴────────┐░░░░░░
░░░░░░░░░░░░│░░░░░░░░░░░░░░░░░░░░│░░░░░░
░░░░░░░░░░░░│ MD5 hasher         │░░░░░░
░░░░░░░░░░░░└───────────┬────────┘░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░│░░░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░Weak pwd2 ░░░░░░░░░░░░░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

 

# Prompt for the string to hash (simple VB InputBox dialog)
Add-Type -AssemblyName Microsoft.VisualBasic
$someString = [Microsoft.VisualBasic.Interaction]::InputBox("Enter string to be hashed", "String2MD5 convertor", "String")

# Compute the MD5 hash of the UTF-8 bytes of the string
$md5 = New-Object -TypeName System.Security.Cryptography.MD5CryptoServiceProvider
$utf8 = New-Object -TypeName System.Text.UTF8Encoding
$hash = [System.BitConverter]::ToString($md5.ComputeHash($utf8.GetBytes($someString)))
# Print the hash as lowercase hex and put it on the clipboard
$hash.Replace("-","").tolower()
Set-Clipboard -Value $hash.Replace("-","").tolower()
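The same MD5 step can also be done without PowerShell, e.g. on Linux with md5sum from coreutils (a sketch):

```shell
# printf avoids the trailing newline that echo would add to the hashed input.
printf '%s' 'abc' | md5sum | cut -d' ' -f1
# -> 900150983cd24fb0d6963f7d28e17f72 (the well-known MD5 test vector for "abc")
```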