Sometimes when working in a posh console I need to see which Git branch I'm on. After a while I found a similar question on StackOverflow. I tried the solution, but noticed that evaluating the prompt function was rather slow.

It soon turned out that the slowness came from the code that detects the changed files. So I removed that code and everything was fine. However, I was still wondering whether it is possible to detect the changed files and keep the prompt fast at the same time.

Background jobs might help

One possible solution uses background jobs. Simply start the job at the beginning of the session and query it every time the prompt is evaluated. Here is an example that refreshes every 10 seconds:

# this code would live in your profile file,
# but as a demo just copy & paste it into your console
Start-Job -Name prompthelp {
  $cli = New-Object Net.WebClient
  while ($true) {
    "{0}-{1}" -f (Get-Date), $cli.DownloadString('').Length
    Start-Sleep -Seconds 10
  }
}

function prompt {
  $lengths = @(Receive-Job -Name prompthelp)
  if ($lengths) {
    $global:__lastPrompt = $lengths[-1]
  }
  # now $global:__lastPrompt contains the last item even when Receive-Job
  # didn't return anything, because a previous prompt consumed it
  if ($global:__lastPrompt) {
    "$global:__lastPrompt> "
  } else {
    "?> "
  }
}

And the following code simulates the slowness of the original prompt function from StackOverflow.

function prompt {
  $cli = New-Object Net.WebClient
  "{0}-{1}" -f (Get-Date), $cli.DownloadString('').Length
}

Bear in mind that asking for the Google home page length is actually fast compared to the git diff-index command.
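A hedged sketch (not from the original post) of the same trick applied to Git. Since a background job cannot see the console's current directory, it watches one fixed repository; the path and job name are hypothetical, and git.exe is assumed to be on PATH:

```powershell
$repo = 'd:\temp\code'   # hypothetical repository path
Start-Job -Name gitbranch -ArgumentList $repo {
  param($repo)
  while ($true) {
    # emit the current branch name of the watched repository
    git --git-dir (Join-Path $repo '.git') symbolic-ref --short HEAD
    Start-Sleep -Seconds 10
  }
}
function prompt {
  $branches = @(Receive-Job -Name gitbranch)
  if ($branches) { $global:__branch = $branches[-1] }
  "PS [$global:__branch]> "
}
```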

Meta: 2011-06-12, Pepa

Tags: PowerShell

This post is a translation of my article for the Czech Technet Flash Magazine.

Today I'd like to show you a module that was inspired by tools from other languages and is mostly used for build automation. Besides that, you can use it in many other situations where you need to perform several steps, one after another, that depend on each other. You can download the module from its project page, and by now it is obviously clear that its name is psake.

Why psake and not msbuild?

Msbuild can be used for build automation as well, right? The answer is simple: if your hobby is editing XML files, then you won't mind configuring msbuild. For everyone else there is psake. Psake (or rather PowerShell) offers the capabilities of a scripting language that you can hardly find in XML-based languages. Msbuild of course offers many features too, so the best choice is often to combine PowerShell and psake with it.

Besides that, you can use psake for administration tasks. Everything you can find in PowerShell can of course be used in psake, because psake itself is written in PowerShell. The following examples will tell you more.

First steps

Go to the psake homepage, click Download, and choose the latest item labeled 4.00. Then unpack the downloaded zip file into a directory (e.g. c:\psake). Start a PowerShell console and continue with:

PS> Set-Location c:\psake
PS> Import-Module .\psake.psm1
# the psake module is imported now

PS> Get-Help Invoke-psake -Full

Psake is a very well documented module. The last command therefore shows all the parameters you can use when calling Invoke-Psake. At the end you will find some examples that give you a pretty good idea of how psake works.

Simple example

I'll show you a very simple example. What it does:

  1. It copies the directory d:\temp\code to d:\temp\codebak (no recursion, for the sake of simplicity).
  2. It lists all the files from d:\temp\codebak and stores their names in d:\temp\codebak\files.txt.
  3. It executes a commit command in a VCS.

You can see that each step follows the previous one. Usually it doesn't make sense to perform the last two if the first one failed. However, in some situations you might need to run them separately. OK, your psake script would consist of these tasks:

task default -depends Full
task Full -depends Backup, ListFiles, Commit
task Backup {
  Write-Host Backup
  gci d:\temp\code | ? { !$_.PsIsContainer } | Copy-Item -Destination d:\temp\codebak
}
task ListFiles {
  Write-Host Files list
  gci d:\temp\codebak | Select -exp FullName | sc d:\temp\codebak\files.txt
}
task Commit {
  Write-Host Commit
  Start-Process GitExtensions.exe -ArgumentList commit, d:\temp\codebak
}

Save this script as psake-build.ps1 and run it. If you don't specify a task name, psake uses the task named default:

PS> Invoke-psake -buildFile d:\temp\psake\psake-build.ps1 
Executing task: Backup
Executing task: ListFiles
Files list
Executing task: Commit

Build Succeeded!

Build Time Report
Name      Duration
----      --------
Backup    00:00:00.1396781
ListFiles 00:00:00.0548712
Commit    00:00:00.3255901
Full      00:00:00.5288634
Total:    00:00:00.6099274

After the script finishes you will see a GitExtensions window prepared and waiting. Notice the nice summary at the end!

In case you would like to execute only some of the tasks, specify them as the value of the -taskList parameter:

PS> Invoke-psake -buildFile d:\temp\psake\psake-build.ps1 -task Backup, Commit

If any of the tasks fails or throws an exception, the script stops and the following tasks are not executed. You can try it very easily by pointing the backup at a nonexistent directory, e.g. $codeDir = 'd:\temp\doesntexist':

PS> Invoke-psake d:\temp\psake\psake-build.ps1
Executing task: Backup
psake-build.ps1:Cannot find path 'D:\temp\doesntexist' because it does not exist.

In case the error is not serious and it is perfectly OK to continue, just use the -ContinueOnError parameter:

PS> Invoke-psake d:\temp\psake\psake-build.ps1
Executing task: Backup
Error in Task [Backup] Cannot find path 'D:\temp\doesntexist' because it does not exist.
Executing task: ListFiles

Psake Parameters

As I said earlier, a psake script is written in PowerShell. Therefore we can store the directory names in script variables so that the script is clearer and more maintainable.
If you look into some psake scripts, you will sometimes see properties { $var1 = 'value'; ... }; should you use it?

If the directory names are just constants and there is no need to change them, use whichever approach you like; a plain script variable is probably the best. But if you need to define a default value in the script and override it from the command line, use the properties { ... } construct in combination with the -properties parameter. The file psake-build.ps1 would then look like this:

# constant that can be changed only from here
$codeDir = 'd:\temp\code'
properties {
  # variable that can be changed from the command line; this is just a default value
  $backupDir = 'd:\temp\codebak'
}

task default -depends Full
task Full -depends Backup, ListFiles, Commit
task Backup {
  Write-Host Backup
  gci $codeDir | ? { !$_.PsIsContainer } | Copy-Item -Destination $backupDir
}

… and you would call Invoke-Psake with the -properties parameter:

PS> Invoke-psake -buildFile d:\temp\psake\psake-build.ps1 -properties @{ backupdir = 'd:\temp\otherbackdir' }

Note the type of the value – it is a hashtable, not a scriptblock. Each item in the hashtable defines a variable that will be evaluated in the same scope as properties { ... } in the psake script (but later, so it overrides the default).

Sidenote: I didn't tell you the whole truth. If you have a line $backupdir = 'some default path' outside of the properties { ... } block, even this variable can be changed from the command line via Invoke-Psake ... -properties @{ backupdir = 'other path' }. Anyway, I wouldn't recommend this approach; the way parameters and scopes are evaluated could change in later versions and the script could stop working.

What are Parameters good for

OK, we have seen -properties. But you may have noticed that psake also lets you pass parameters to Invoke-Psake via -parameters. The type of -parameters is again a hashtable with the same structure as -properties, and a new variable is created from each key-value pair. These variables can then be used inside the properties { ... } block of the build script – that means we can parametrize the script. I hope the difference will be clear from the example.

Let's suppose we have psake script like this:

properties { $s = get-service $services }
task default
task stop { $s | stop-service -whatif }
task start { $s | start-service -whatif }

And we pass name or names of services we would like to stop/start:

PS> Invoke-psake -buildFile d:\temp\psake\psake-services.ps1 -task start -parameters @{ services = 'W3SVC' }

What have we done? In the properties block we stored the list of services whose names match the variable $services.
Somebody could object that we would get the same effect by defining an initialization task that is called first, before the others. Look at the changed code:

task default
properties { $services = "noservice" }
task init { $s = get-service $services }
task stop -depends init { Write-Host stop service $s; $s | stop-service -whatif }
task start -depends init { Write-Host start service $s; $s | start-service -whatif }

And we would call the script without -parameters:

PS> Invoke-psake -buildFile d:\temp\psake\psake-services2.ps1 -task start -properties @{ services = 'W3SVC' }
Executing task: init
Executing task: start
start service
psake-services2.ps1:Cannot bind argument to parameter 'Name' because it is null.

As you can see, the idea is not bad – it just doesn't work in psake. Every task (or rather the scriptblock the task represents) runs in its own scope, and that's why variables are not shared among tasks. We could of course hack around it with the script: scope modifier, but I can hardly recommend it, because it changes the inner state of the module.
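For completeness, a sketch of that hack – note the $script: prefix, which pushes the variable into the module's script scope (again, not recommended):

```powershell
task default
properties { $services = 'noservice' }
# $script:s is visible to the other tasks because it lives in module scope
task init { $script:s = Get-Service $services }
task stop -depends init { Write-Host stop service $script:s; $script:s | Stop-Service -WhatIf }
task start -depends init { Write-Host start service $script:s; $script:s | Start-Service -WhatIf }
```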

Psake for developers

I have been writing about psake only in general terms, but I have mentioned almost everything you will need for automation tasks.
There is something more for programmers. The best feature is the exec function, which terminates the psake script when an application called inside exec finishes with an error, indicated by a nonzero return code. The body of exec is very simple; you can look at it with Get-Content function:\exec.
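If you don't have psake at hand, the idea of exec roughly corresponds to this sketch (simplified, not the verbatim psake source):

```powershell
function exec {
  param(
    [scriptblock]$cmd,
    [string]$errorMessage = "Error executing command: $cmd"
  )
  & $cmd                          # run the native command
  if ($lastexitcode -ne 0) {      # native commands report failure via exit code
    throw $errorMessage
  }
}
```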

A task that builds a solution is almost a one-liner when you use exec:

$framework = '4.0'
task Build {
  exec { msbuild $slnPath '/t:Build' }
}

You can of course pass more parameters to msbuild; this was just a quick example. You have to tell psake which version of the .NET framework you want to use (the $framework variable) and psake makes sure the right msbuild.exe is called.

A simple example of build automation for us developers could cover clean, build, tests, and copying to a release directory:

$framework = '4.0'
$sln = 'c:\dev\.....sln'
$outDir = 'c:\dev\...'

task default -depends Rebuild,Test,Out
task Rebuild -depends Clean,Build
task Clean {
  #exec { msbuild $slnPath '/t:Clean' }
  Write-Host Clean....
}
task Build {
  #exec { msbuild $slnPath '/t:Build' }
  Write-Host Build....
}
task Test {
  # run the nunit console or whatever tool you want
  Write-Host Test....
}
task Out {
  #gci somedir -include *.dll,*.config | copy-item -destination $outDir
  Write-Host Out....
}

Can this pure happiness be even more beautiful? Yes – at least for those who use the mouse more than the keyboard. In some cases it is much faster than typing on the command line. Let's create a GUI for the build automation.

Psake and GUI

We will create the GUI in .NET WinForms. Remember, this is just a sample showing that it is possible, so the code is kept as concise as possible.
Note that PowerShell has to be run with the -STA switch.

Add-Type -AssemblyName System.Windows.Forms
Add-Type -AssemblyName System.Drawing
if (!(Get-Module psake)) {
  sl D:\temp\psake\JamesKovacs-psake-b0094de\
  ipmo .\psake.psm1
}

$form = New-Object System.Windows.Forms.Form
$form.Text = 'Build'
$form.ClientSize = New-Object System.Drawing.Size 70,100

('build',10), ('test',30), ('out',50) | % {
  $cb = New-Object Windows.Forms.CheckBox
  $cb.Text = $_[0]
  $cb.Size = New-Object System.Drawing.Size 60,20
  $cb.Location = New-Object System.Drawing.Point 10,$_[1]
  Set-Item variable:\cb$($_[0]) -Value $cb
  $form.Controls.Add($cb)
}
$go = New-Object System.Windows.Forms.Button
$go.Text = 'Run!'
$go.Size = New-Object System.Drawing.Size 60,20
$go.Location = New-Object System.Drawing.Point 10,70
$go.Add_Click({
  if ($cbbuild.Checked) { $script:tasks += 'Rebuild' }
  if ($cbtest.Checked)  { $script:tasks += 'Test' }
  if ($cbout.Checked)   { $script:tasks += 'Out' }
  $form.Close()
})
$form.Controls.Add($go)

$script:tasks = @()
$form.ShowDialog() | Out-Null
if ($script:tasks) {
  Invoke-psake -buildFile d:\temp\psake\psake-devbuild.ps1 -task $tasks
}

Only a few lines of code and you have a GUI. I use a similar form for a quite complex msbuild/psake configuration, and I like it mainly because I don't have to remember all the task names.

The module-import part of the code just checks whether psake is already imported. If you import psake twice or more, calling Invoke-Psake fails. That's a problem of psake itself; in general there should be no problem with modules imported more than once.

Note: you probably noticed that I work with $script:tasks outside of the event handler. Why don't I call Invoke-Psake from the handler itself? Psake writes results about the tasks (tables, timings, info about the current task) to the pipeline so that you can redirect the output to a file. In the event handler the output is processed differently; it is not sent to the main pipeline, so you wouldn't see it. The only messages you would see are those produced by Write-Host (written, of course, directly to the console).

Change psake

I'll show you how to change a module's behaviour without changing its file. It is a general technique that is not used very often, because it changes the inner environment of the module; without good knowledge of how the module works, the module could stop working correctly. Anyway, why not learn something new?

Let's say we want to write the current user and computer name at the end of psake's output. For that we need to change the function Write-TaskTimeSummary. Actually, we don't change it directly:

PS> $module = Get-Module psake
PS> & $module { ${function:script:Write-TaskTimeSummaryBak} = (gi function:\Write-TaskTimeSummary).ScriptBlock }
PS> & $module { ${function:script:Write-TaskTimeSummary} = {
  . Write-TaskTimeSummaryBak
  "{0}@{1}" -f $env:USERNAME, $env:COMPUTERNAME
} }

We created a new function Write-TaskTimeSummaryBak as a backup of Write-TaskTimeSummary. Then we changed the definition of Write-TaskTimeSummary so that it calls the backed-up function and then adds the user and computer names.

The general pattern for editing a module's behaviour is:

& (Get-Module mymodule) { command that you want to execute in module scope }
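Here is a small self-contained demo of the pattern with a throwaway dynamic module (all names are made up for the demo):

```powershell
# create a dynamic module exporting one function and import it
$m = New-Module -Name demo { function Greet { 'hello' } } | Import-Module -PassThru
Greet   # prints the original text
# redefine Greet inside the module's own scope, without touching any file
& $m { ${function:script:Greet} = { 'hello, patched' } }
Greet   # now prints the patched text
```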

And we are finished. You can find more info on the psake wiki – for example how to set up functions called before/after each task, or how to nest psake builds.
I hope you will enjoy psake as much as I do!

Meta: 2011-03-19, Pepa

I created a repository with some modules/scripts that could be useful for others (or only for me ;)). If you are interested, take a look.

Currently added modules:

I blogged about them some time ago, so you can find some more info here at my blog.

Meta: 2010-08-24, Pepa

Usually when I need to select some lines from a text, I use the -match operator. It's easy, and it fills the $matches variable for free when you pass a scalar value in.

Today I wanted to use Select-String because of its ability to show the context (several lines around the match). So I copied some text from an email to the clipboard and tried:

[0] Select-String -InputObject ((clip) -split "`r`n") -Pattern 'Runspace' -Context 2
> $PsHome  Evaluates to the full path of the installation directory 
for Windows PowerShell. Sample outputs:  C:\Windows\system32\ WindowsP
owerShell\v1.0\  C:\Windows\SysWOW64\WindowsPowerShell\v1.0\  Note tha
t even if you have PowerShell version 2, the dir path is still “v1.0”,
 because PowerShell 2 is meant to be compatible with version 1. $Host  
 Eval to a object that represents the.....

Garbage! Wow, where does that come from? OK, I'll try to pipe it instead.

[1] (clip) -split "`r`n" | Select-String -Pattern 'Runspace' -Context 2
  CurrentUICulture : en-US
  PrivateData      :
> IsRunspacePushed : False
> Runspace         : System.Management.Automation.Runspaces.LocalRunspace

  If you are running PowerShell in Windows Console or emacs, the name line may be:

Hmm, that's exactly what I expected. So where is the problem?
I'll make the story short. The parameter -InputObject is of type PSObject. After looking through the code in Reflector, I found this line:

operandString = (string) LanguagePrimitives.ConvertTo(

This code is executed in the doMatch method of the SelectStringCommand class. The only way to convert an array of strings into one string is to join them (probably via $ofs), and that's exactly what happened.
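You can see the same joining behaviour directly in the console – string interpolation of an array joins its items with $ofs (a space by default):

```powershell
$lines = 'first', 'second', 'third'
"$lines"             # -> first second third   (joined with a space)
$ofs = '|'
"$lines"             # -> first|second|third
Remove-Variable ofs  # restore the default separator
```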

IMO that's a bug. I would expect that passing an object via the pipeline or via a parameter behaves the same. I'm going to Connect to create a bug report.

Bug reported: you can vote for it if you consider it a bug.

Meta: 2010-07-19, Pepa

Tags: PowerShell bug

Twitter is going to stop supporting basic authentication and will use only OAuth. That's why I tried again to connect to Twitter via OAuth and found out that there have been some changes during the last year.

After some minor changes in the code, OAuth via PowerShell is working again. Check my updated article How to use OAuth to connect to Twitter in PowerShell.

Meta: 2010-07-10, Pepa

Update 2010-07-10: added support for PIN and a link for the library download, because there is no new release at the DevDefined home page.

When I was working on a microblog reader for Twitter, I was thinking about using OAuth for authorization. Recently I started again, and now I can show you a full working example, from registering an application up to getting the data.

Register your application

If you have an application that will request data from a service and you want to use OAuth for authorization, you first have to register it, so that the service knows about the application. In my case the service is Twitter and the application (the consumer, in OAuth terminology) is PowerShell.

First go to Twitter, log in using your standard credentials, and browse to Settings–>Connections. In the right column go to the Developers section and click the link to the applications page. On this page you can see all the applications you have registered so far. To create a new application (probably your first one), click Register a new application and fill in the info.

First part of registration
Second part of registration

Registered

After you submit the form, you will receive your application's key and secret (consumer key / consumer secret). These two hashes are used by your application when trying to get an authorization key from the service (Twitter). OK, it's time to play with them.

Authorize and request data

You can implement the OAuth protocol on your own, or you can use an existing implementation. First I tried an implementation by Shannon Whitley. It didn't work as expected (some problems with token expiration). Then I downloaded DevDefined.OAuth from Google Code, which worked without problems.

Update 2010-07-10: there is no new release available, although the code is still alive. You have to either download the project and compile it yourself, or download the library I compiled for you. The old version worked fine, but some time later Twitter introduced the PIN flow. The source code reflects the new changes, but no release has been issued so far.

[3] Add-Type -Path C:\OAuthDevDefined\DevDefined.OAuth.dll
[4] $cons = New-Object devdefined.oauth.consumer.oauthconsumercontext

Use the keys provided by Twitter and set the signature method.

[5] $cons.ConsumerKey = '6NoGCtBEDdZGZtKe7JWdw'
[6] $cons.ConsumerSecret = 'lPkVk1PUCdNe7yXrGBI5fGO1UNjyU4rXOUzHt2SdvE'
[7] $cons.SignatureMethod = [devdefined.oauth.framework.signaturemethod]::HmacSha1

Create an OAuth session and request authorization. PowerShell will redirect you to a Twitter page where you allow access for our application.

[8] $session = new-object DevDefined.OAuth.Consumer.OAuthSession `
 $cons,"", `
[9] $rtoken = $session.GetRequestToken()  #unique token just for authorization
[10] $authLink = $session.GetUserAuthorizationUrlForToken($rtoken, 'anything'); $authLink
[11] [diagnostics.process]::start($authLink)  #redirection to Twitter

Prompt to allow access. Access granted, PIN issued.

A browser window should appear and you will be asked to allow the access. After that you will see a PIN that you have to pass as a request parameter. We will ask for the access token that identifies our PowerShell client, and then we will download the last 5 statuses and parse the user names from them.

[12] $pin = read-host -prompt 'Enter PIN that you have seen at Twitter page'
[13] $accessToken = $session.ExchangeRequestTokenForAccessToken($rtoken, $pin)
[14] $accessToken | Export-CliXml c:\temp\myTwitterAccessToken.clixml

Now you have your access token, which contains the keys for later use. The token is stored in an XML file. Currently Twitter's access token doesn't expire, so you just need to store it and use it later.
OK, "use it" – but how? It is very similar to what you have seen so far:

[1] Add-Type -Path C:\OAuthDevDefined\DevDefined.OAuth.dll
# create context (provide correct keys)
[2] $cons = New-Object devdefined.oauth.consumer.oauthconsumercontext
[3] $cons.ConsumerKey = '6NoGCtBEDdZGZtKe7JWdw'
[4] $cons.ConsumerSecret = 'lPkVk1PUCdNe7yXrGBI5fGO1UNjyU4rXOUzHt2SdvE'
[5] $cons.SignatureMethod = [devdefined.oauth.framework.signaturemethod]::HmacSha1
# create session; I pass $null and not full urls. It looks weird, maybe 
# there is a more elegant way how to create the session
[6] $session = new-object DevDefined.OAuth.Consumer.OAuthSession $cons, $null, $null, $null
# create access token and fill its data
[7] $accessToken = new-object DevDefined.OAuth.Framework.TokenBase
[8] $at = import-cliXml C:\temp\myTwitterAccessToken.clixml
[9] $accessToken.ConsumerKey, $accessToken.Realm, $accessToken.Token, $accessToken.TokenSecret = `
  $at.ConsumerKey, $at.Realm, $at.Token, $at.TokenSecret
# finally, create request and read response
[10] $req = $session.Request($accessToken)
[11] $req.Context.RequestMethod = 'GET'
[12] $req.Context.RawUri = [Uri]''
[13] $res = [xml][DevDefined.OAuth.Consumer.ConsumerRequestExtensions]::ReadBody($req)
[14] $res.statuses.status | % { $_.user.Name }
Michal Těhník
Jeffery Hicks
Jon Skeet
Martin Hassman
David Grudl


More info

Meta: 2010-07-10, Pepa

Hi Craig,

what can I say about SG2010?

I have been using PowerShell for some time, so I'm quite familiar with the language. That's why I didn't learn much during the SG, but it was fun. I liked the tasks because they were interesting.
Every challenge is great – people are curious how many stars their solution will get. Competition is IMHO better than collaboration (as in SG2009).

In general there are some pros and cons:

  • Pro: SG is a good idea
  • Pro: competition over collaboration
  • Pro: randomly selecting award winners is a very pleasant part of SG (I would say that even if I hadn't won anything)
  • Con: poshcode problems; that's a pretty obvious point
  • Con: I didn't know why my score was low. It would be better to at least leave a comment for me
  • Con: some scripts got 5 stars even though the scripter wasn't aware of PowerShell capabilities (built-in help support, Test-Path for the registry, …). Jaykul pointed to one of them on Twitter.
  • Con: you should encourage all participants (or at least people solving the advanced scenario) to use PowerShell features. For example, why use remote registry access via some .NET class when there are cmdlets? I know it is possible to use .NET (I'm a developer), but IMHO cmdlets are the standard way.


  • Every user can solve both the advanced and the beginner scenario, but the stars should be counted from only one of them.
  • The previous point means that there should really be two winner categories – advanced & beginners. It's not fair if advanced people solve both scenarios and get points from both. Beginners are then less motivated; they cannot win.
  • Style points for documentation are wasted time. I saw solutions where the doc was very long, maybe half of the script or more. Is it needed? I don't think so. Documentation is boring; it is much more entertaining to make the code better than to spend time on documentation.

I hope it doesn't sound too much like criticism :)
I just wanted to point out a few things, because I have read only positive feedback so far.

Thank you for SG2010!

Meta: 2010-06-11, Pepa

Tags: PowerShell

After a long time I had a few minutes of creative mood again. I added tagging to the blog posts, so finally not everything falls into the PowerShell category only.
At the same time I stopped working with categories and now use only the term tag.

Meta: 2010-05-19, Pepa

Tags: Stránky

This is a translation of my article about some PowerShell tips & tricks, not only for developers. I will split the article into several parts so you won't get tired too early ;)

Operators and variables – Variables $$, $^

$$ and $^ are automatic variables that can be useful when using the shell interactively. You will need them only rarely, but they may be worth knowing. What they are about can be seen in an example:

[0] Get-ChildItem -rec c:\temp\powershelltest\version1
... some files
[1] $^
[2] $$

The variables contain the first and the last token from the previous line. More precisely, quoting the documentation: $^ "Contains the first token in the last line received by the session" and $$ "Contains the last token in the last line received by the session".
They can save you time when typing long paths passed to Get-ChildItem, Get-Item, etc.

Have a look at StackOverflow to see that some people really use them.

Meta: 2010-05-14, Pepa

This is a translation of my article about some PowerShell tips & tricks, not only for developers. I will split the article into several parts so you won't get tired too early ;)

Operators and variables – chain of operators

This technique is not used very often, but it can sometimes make your code more readable and save you a few cmdlet calls.
Just for this demo, suppose that our directory contains these files:


We would like to get only those that begin with ch01. After that we want to keep only the part of the file name that can be converted to [datetime]. You would probably do it like this:

[0] Get-ChildItem c:\temp\aa\ | 
   select -exp Name | 
   ? { $_ -like 'ch01*'} | 
   % { $_ -replace 'ch01-|\.txt','' } 

Looks pretty familiar, right? However, you can also use another approach.
Note that select -exp Name is not strictly necessary in this case, because [FileInfo] is converted to a string, and during this conversion the file name is returned.
Update: [FileInfo] is sometimes converted to the full path instead. It depends on how the object was constructed. However, I consider this behaviour rather buggy.

[1] (Get-ChildItem c:\temp\aa\ | select -exp Name)  `
   -like 'ch01*'  `
   -replace 'ch01-|\.txt',''


There are two things to highlight:
First – operators can be chained. So the replace operator could be split in two for the sake of readability; the code would then look like this: ...-replace 'ch01-','' -replace '\.txt',''. The result of the first operator is used when evaluating the second one. I haven't seen this documented anywhere – maybe I wasn't searching hard enough. If you find some info about it, please let me know.

Second – some operators work with scalars and some with arrays. That's why we could use arrays as the left operand of -like and -replace and the results were correct.
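A quick illustration of that difference in the console:

```powershell
'ch01-a' -like 'ch01*'                     # scalar left operand -> True
('ch01-a','other','ch01-b') -like 'ch01*'  # array left operand -> ch01-a, ch01-b (filtered)
```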

Let's go even further with our example and convert the values that represent dates to [datetime].
This is wrong: ... -replace 'ch01-|\.txt','' -as [datetime]. Why? In this case the operator tries to convert the input object (an array) to a single date, and that can't succeed. Anyway, we change the code only a little and it behaves correctly:


Get-ChildItem c:\temp\aa\ | 
	select -exp Name | 
	? { $_ -like 'ch01*'} | 
	% { $_ -replace 'ch01-|\.txt','' } |
	% { $_ -as [datetime] } |
	? { $_ -le '2010-03-01' }
(Get-ChildItem c:\temp\aa\) `
	-like 'ch01*' `
	-replace 'ch01-|\.txt','' `
	-as [datetime[]] `
	-le '2010-03-01'

Any disadvantage?
There is only one: you have to know in advance that you will use this chaining, and begin the command with a parenthesis.

Meta: 2010-05-10, Pepa