How to prevent a page from being indexed?
With the ever-growing computing resources of the major search engines, they are now capable of indexing new pages very quickly.
Google currently seems to have the most powerful technology and resources; nowadays it is harder to stay out of the index than to get into it.
Thanks to the many services used by millions of people (such as Analytics or Chrome), Google can discover and collect previously unknown URLs very quickly and efficiently.
Making sure that no link points to a URL (a so-called orphan page) is not enough to guarantee that it will never show up in a search engine's results pages.
Using a robots.txt file is not the best solution either. Did you know that it does not prevent a search engine from listing a discovered URL in its results?
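As a reminder, a minimal robots.txt rule blocking crawlers from a section looks like the sketch below (the /private/ path is just a placeholder); it forbids crawling, not indexing:

```
# robots.txt, served at the site root.
# Blocks all compliant crawlers from fetching /private/,
# but those URLs can still be indexed if they are
# discovered through external links.
User-agent: *
Disallow: /private/
```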
A search engine such as Google will then list the bare URL, without any title or description. There are better ways to hide the new website or service that you plan to unveil with great fanfare on D-Day, don't you think?
Using a noindex robots meta tag to forbid indexing is already more effective.
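A minimal sketch of that tag, placed in the head of every page that must stay out of the index (for non-HTML resources such as PDFs, the equivalent X-Robots-Tag HTTP response header serves the same purpose):

```html
<!-- Tells compliant crawlers not to index this page.
     Note: the page must remain crawlable (not blocked in
     robots.txt), otherwise the crawler never sees the tag. -->
<meta name="robots" content="noindex">
```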
The safest solution is to put the content behind password-protected access or to allow only selected IP addresses.
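As an illustration, here is a minimal sketch for Apache 2.4 combining both approaches in a .htaccess file; it assumes an existing .htpasswd file and that .htaccess overrides are enabled, and the file path and IP address are placeholders:

```apache
# Grant access either with a valid login or from an allowed IP.
# /path/to/.htpasswd and 203.0.113.42 are illustrative values.
AuthType Basic
AuthName "Restricted preview"
AuthUserFile /path/to/.htpasswd
<RequireAny>
    Require valid-user
    Require ip 203.0.113.42
</RequireAny>
```

Since the content is simply unreachable for anyone without credentials or an allowed address, crawlers can neither fetch nor index it, regardless of how they discovered the URL.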