Best alternative to a NAS for central storage

Hi All,
I own a MacBook Pro and an iMac.
Next to that I use a QNAP NAS to store my data centrally. That should make it easy to use my data on every device that I have.
But in reality the NAS is very slow with my Apple products. With a Windows machine it's fine, but getting it to show up in Finder is a pain in the neck.
Looking at the competition, Synology also has lots of issues in that area, as can be read on all the user forums.
So I want an alternative solution with the following features:
1) Store all my content centrally and use it on my Apple products (so data is only in the central location)
2) Back up the data from my central location to a location that's not in my house (in case of theft or fire).
3) A fast connection to the Apple products.
As far as I know there's nothing in the Apple product range that does this, but that could be my misunderstanding of the product line.

When I saw 2TB, it triggered these questions:
ISP
• Sufficient upload/download transfer caps to achieve your goal
• Sufficient bandwidth between you and your cloud services provider to achieve daily backups
• Edge device upgrades for higher bandwidth performance to your ISP
Cloud Services Provider
• Assurance your data remains on servers in your country
• Sufficient storage expandability that fits your expense budget
• Sufficient cloud performance to achieve 1.2 (daily backups), and subsequent document access
• Adequate cloud management software interface
• Reviews and other documented evidence of cloud service reliability
Personal Document Security
• One can place password-protected documents in a folder
• With Disk Utility, create a new disk image from the folder, with compression and 256-bit AES encryption
• Protection in case 2.1 (the in-country assurance) is disregarded
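A quick way to sanity-check the ISP items above is to estimate how long the initial 2 TB upload would take. This is only a Python sketch; the listed speeds and the 85% efficiency factor are illustrative assumptions, not measurements:

```python
def upload_days(size_tb: float, upload_mbps: float, overhead: float = 0.85) -> float:
    """Days needed to push `size_tb` terabytes through an `upload_mbps`
    link that achieves `overhead` of its nominal rate."""
    size_bits = size_tb * 1e12 * 8              # TB -> bits (decimal units)
    seconds = size_bits / (upload_mbps * 1e6 * overhead)
    return seconds / 86400                      # seconds -> days

# Illustrative residential upload rates (assumptions, not recommendations):
for mbps in (5, 20, 100):
    print(f"{mbps:>3} Mbit/s up -> {upload_days(2, mbps):5.1f} days for the first 2 TB")
```

At typical asymmetric residential rates the first full upload takes weeks, which is why the transfer-cap and edge-device questions matter before choosing a provider.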

Similar Messages

  • Best solution for media storage

    Hi Everyone,
I am looking for suggestions/how-tos on the best setup for media storage. We have two editors (and two computers - a PowerPC Mac Pro and an Intel iMac). We are currently using a 500 GB internal drive in the Mac Pro along with 1 additional external 1.5 TB drive. Then, on our iMac we use a 1.5 TB external drive. For external drives, we use LaCie.
    Here's our problem. We are running out of space. So, what is the best solution for media storage? Should we get a small rack for our external drives from LaCie and daisy chain them? Or, should we get a server? The only problem I have with a server is connection speed...we, most of the time, have a slow connection at our college.
    p.s. here is a link to LaCie's website listing a rack...and we have the d2 drives. So, do you daisy chain them or plug each one in individually?
    http://www.lacie.com/us/products/product.htm?pid=10172
    Any ideas/thoughts/suggestions are greatly appreciated!!
    Thanks in advance.
    zanm

    Hi Everyone!
Thank you all for your suggestions. I agree...Drobo looks great, but I am finding more and more bad news when it comes to editing. It looks like a great solution for archiving. I received another solution with great reviews from one of my colleague's contacts. They work for a big company.
    Hope this helps...something else to add to the mix.
*Our edit suite storage solution is a Facilis Terrablock 24EX unit. (http://www.facilis2.com/24ex.html) It has 18TB of space and is connected via 4 Gb Fibre Channel to both suites. On top of that we have an Apple Xserve running Final Cut Server that sorts and catalogs all the video we put in, so network users (Mac and PC) can search and look at media from their workstations. We just put this in a few months ago. Before that we had a Rorke Data Galaxy HDX with 6TB of space on a 2 Gb Fibre Channel connection. ( http://www.rorke.com/av/galaxy-hdx.cfm ) It’s now a backup storage unit. Both systems have excellent performance though; we upgraded mostly for space and the server software.*
    Again, thank you all for your help. I will research all suggestions.

  • Best Format for Additional Storage

    I've read a few threads on this issue, but none seem to provide the answers I need, so I'm starting a new one.
    I'm trying to make the best decision for additional storage for editing with FCP 4.5. I have a new project for a new customer that will require a significant amount of video storage while I am still doing my current projects (so I can't make room/stop what I'm doing, I'll be doing projects concurrently). This storage solution will just be a capture scratch disc.
You can see below what I've got, system-wise. My current storage consists of 2 internal drives and 3 external fw800 drives. This setup has worked effectively for a number of years, but I'm now a little nervous (after reading some other posts on this subject) about adding another fw800 drive to the mix (such as the 1.2 TB LaCie Bigger Disk).
    Here are my questions:
1) Is the most reasonable thing to do (balancing price and performance) to add another fw800 drive? I have a PCI card with open fw800 ports.
    2) Are any ethernet solutions viable for working in FCP? If so, which ones? My current router is an old one but upgrading that to a gigabit option is reasonable if this will allow enough throughput for editing video from this storage solution.
    3) Is a SATA card, such as the Sonnet Tempo-X eSATA (http://www.sonnettech.com/product/tempo-x_esata8.html), a viable option?
    I'm gathering data at this point so any input is appreciated. Thanks in advance!

    Yea. The site is www.granitedigital.com if you didn't get that. I would probably go with them for heavy-lifting video jobs or go with Glyph (www.glyph.com) if you've got the means. They've got great flexible rackmount stuff in addition to enclosures. (They are, in the opinion of many, simply the best made external drives on the planet, used in almost every audio recording studio. You get what you pay for.) I would also check out G-technology. They all make great stuff; it's a matter of what fits your budget and needs based on what each company can offer you in terms of flexibility.
That said, I have had good experience with the LaCie drives I have encountered. I have an external CD-RW drive which is several years old and is still cranking out my CD masters for me without fail. I have a great many friends who use the D2 drives as well... they are solidly built (all metal) and use an Oxford chipset (I believe the older 922, but it's a decent chip) and, like someone else said, probably whatever batch of drives they can get cheapest. Inevitably, like any other drive, there are bound to be duds and drives that will fail prematurely. All things considered, in my experience they are workhorses and great "all-around" drives for doing whatever needs to be done in your everyday life.
    hope this helps,
    bret

  • Seeking advice on Best Practices for XML Storage Options - XMLTYPE

    Sparc64
    11.2.0.2
    During OOW12 I tried to attend every xml session I could. There was one where a Mr. Drake was explaining something about not using clob
    as an attribute to storing the xml and that "it will break your application."
We're moving forward with storing the industry-standard invoice in an XMLType column, but I'm now concerned that our table definition is not what was advised:
    --i've dummied this down to protect company assets
      CREATE TABLE "INVOICE_DOC"
       (     "INVOICE_ID" NUMBER NOT NULL ENABLE,
         "DOC" "SYS"."XMLTYPE"  NOT NULL ENABLE,
         "VERSION" VARCHAR2(256) NOT NULL ENABLE,
         "STATUS" VARCHAR2(256),
         "STATE" VARCHAR2(256),
         "USER_ID" VARCHAR2(256),
         "APP_ID" VARCHAR2(256),
         "INSERT_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
         "UPDATE_TS" TIMESTAMP (6) WITH LOCAL TIME ZONE,
          CONSTRAINT "FK_####_DOC_INV_ID" FOREIGN KEY ("INVOICE_ID")
                 REFERENCES "INVOICE_LO" ("INVOICE_ID") ENABLE
       ) SEGMENT CREATION IMMEDIATE
    INITRANS 20  
    TABLESPACE "####_####_DATA"
       XMLTYPE COLUMN "DOC" STORE AS BASICFILE CLOB  (
      TABLESPACE "####_####_DATA" ENABLE STORAGE IN ROW CHUNK 16384 RETENTION
      NOCACHE LOGGING
      STORAGE(INITIAL 81920 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT))
    XMLSCHEMA "http://mycompanynamehere.com/xdb/Invoice###.xsd" ELEMENT "Invoice" ID #####
    What is a best practice for this type of table?  Yes, we intend on registering the schema against an xsd.
    Any help/advice would be appreciated.
-abe

    Hi,
    I suggest you read this paper : Oracle XML DB : Choosing the Best XMLType Storage Option for Your Use Case
    It is available on the XML DB home page along with other documents you may be interested in.
    To sum up, the storage method you need depends on the requirement, i.e. how XML data is accessed.
There was one where a Mr. Drake was explaining something about not using clob as an attribute to storing the xml and that "it will break your application."
I think the message Mark Drake wanted to convey is that CLOB storage is now deprecated and shouldn't be used anymore (though still supported for backward compatibility).
The default XMLType storage starting with version 11.2.0.2 is now Binary XML, a post-parse binary format that optimizes both storage size and data access (via XQuery), so you should at least use it instead of BASICFILE CLOB.
    Schema-based Binary XML is also available, it adds another layer of "awareness" for Oracle to manage instance documents.
    To use this feature, the XML schema must be registered with "options => dbms_xmlschema.REGISTER_BINARYXML".
    The other common approach for schema-based XML is Object-Relational storage.
BTW... you may want to post in the dedicated forum next time: {forum:id=34}
Mark Drake is one of the regular users there, along with Marco Gralike, whom you've probably seen at OOW too.

  • Best practice for video storage?

What is the best way to store videos on external drives? I am a new FCP user coming from Sony Vegas. In Vegas I could import the .mts files without needing the complete file structure that FCP requires when ingesting to ProRes. Since Vegas worked in this manner, I store the .mts files in folders dated by shoot. Now I'm faced with finding another program to convert to ProRes, as FCP needs the complete file structure to ingest media. Another issue is that I have different types of shoots on one single flash card, and I can no longer just dump the .mts files in dated folders. What is the best way of splitting up the flash card while keeping the information FCP requires for ingesting media?
    The solution would be to dump the videos to my external drive as soon as I'm done shooting, but most likely it's something I'll forget to do.
Does anyone have any advice on storing videos that have two or more types of footage from different events?

    I have a tutorial that covers this...
    http://library.creativecow.net/ross_shane/tapeless-workflow_fcp-7/1

  • X4500 RAID Configuration for best performance for video storage

    Hello all:
    Our company is working with a local university to deploy IP video security cameras. The university has an X4500 Thumper that they would like to use for the storage of the video archives. The video management software (VMS) will run on an Intel based server with Windows 2003 server as the OS and connect to the Thumper via iSCSI. The VMS manages the permissions, schedules and other features of the cameras and records all video on the local drives until the scheduled archive time. When the archive time occurs, the VMS transfers the video to the Thumper for long term storage. It is our understanding that when using iSCSI and Windows OS there is a 2TB limit for the storage space - so we will divide the pool into several 2TB segments.
The question is: Given this configuration, what RAID level (0, 1, Z or Z2) will provide the highest level of data protection without compromising performance to a level that would be noticeable? We are not writing the video directly to the Thumper; we are transferring it from the drives of the Windows server to the Thumper, and we need that transfer to be very fast, since the VMS stops recording during the archiving and restarts when complete, creating down time for the cameras.
    Any advice would be appreciated.

I'd put as many disks as possible into a single RAID-Z set (ZFS's single-parity striping, roughly the RAID 5 of the options you listed). This will provide a high level of performance, with the ability to sustain a single disk failure.
With striping, the data is split across all the disks in the stripe set. So, if you have 10 disks in the set, then instead of writing everything to a single disk, which is slow, roughly 1/10th of the data is written to each disk simultaneously, which is very fast. In effect, the more disks you write to, the faster the operation completes.
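The 1/10th-per-disk arithmetic above can be illustrated with a toy model (plain round-robin striping, not ZFS internals; the names are made up for the sketch):

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4) -> list[list[bytes]]:
    """Deal fixed-size chunks of `data` round-robin across `n_disks`,
    the way a plain stripe set spreads writes. With 10 disks, each one
    receives ~1/10th of the data, so all spindles work in parallel."""
    disks: list[list[bytes]] = [[] for _ in range(n_disks)]
    chunks = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    for i, c in enumerate(chunks):
        disks[i % n_disks].append(c)
    return disks

layout = stripe(b"0123456789" * 4, n_disks=10)
# Reading the chunks back in round-robin order recovers the original data.
rebuilt = b"".join(layout[i % 10][i // 10] for i in range(10))
assert rebuilt == b"0123456789" * 4
```

RAID-Z adds a parity chunk per stripe on top of this layout, which is what buys the single-disk-failure tolerance at a small capacity cost.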

  • Best practice for centralized material planning in distributed environment

Hi All - Hope everybody is doing fine. Things are going fine at my end.
    We are debating on the organization hierarchy decision at our current utilities client. We are proposing that:
    1)     We will have central warehouse as plant/storage location/warehouse.
2)     Create one plant (it will be virtual only) and 65 storage locations. Plant maintenance orders will be created at the storage location and will generate the reservation at the plant level. This reservation will be converted into an STO during the MRP run. The central warehouse plant will use this STO to create the outbound delivery and then the shipping documents. APO will be OK with this arrangement as all the requirements are centralized. Now my problem is, the reservation doesn't have any delivery address to copy into the STO. To populate the delivery address automatically, I believe we will need a modification/enhancement. Am I right? If so, what type of modification will we require? I mean effort-wise (hours etc.) and time-wise.
Any help is highly appreciated.
    best regards,
    KHAN

    Hi,
    If all 65 storage locations are going to be just different areas within the same physical site (address) then that should be OK, but if any of those storage locations have a different site address to the others then I would suggest that they should be Plants and NOT storage locations.
    The reason you are having problems with addresses is that in a Plant to Plant reservation the plants have different addresses.
PLEASE don't use storage locations for different sites; if you DO, then this will only be the first of MANY modifications that you will have to do.
    Yes there will be more master data if they are all set as plants, but how many times does master data change compared to how many receipts, issues and transfers etc. will occur. I would rather compromise on the master data maintenance than have to compromise on EVERY movement.
    I can't stress this strongly enough, if you use storage locations when you should be using plants you WILL lose major functionality and the SAP implementation will be seen as a poor one.
    I have seen many implementations at first hand where this has been the one influencing factor that has resulted in a poor implementation instead of a good one.
    Steve B

  • Best option for file storage

    I want to archive some of my CAD files and other files on a 'by year' basis. Is a flash drive a good option? They seem neat but then again they are a bit small & could get lost. Any ideas on the best option these days?

A 2.5" external hard drive will hold MUCH more and still fit in a shirt pocket should you choose. Lots of storage for a reasonable price. Remember to make backups of whatever media you choose. You don't want your files to be in just one place.

  • Best practices for additonal storage devices in a clustered container

I am wondering what is considered best practice related to adding more storage devices. I have a Solaris Cluster 3.2, a failover resource group with an sczbt resource and a HASP resource for the zone root path. Now I have to add more storage devices (for Oracle) to the zone. The additional storage devices are in the same metaset as the zone root. I see the following options:
    -> Add it to the zone (simple not managed by HASP)
    -> Add it to the existing HASP for the RG and lofs mount it in the Zone (managed by HASP, but needs lofs mount)
    Are there other options, and what is considered best practices?
    Fritz

    Hi Fritz,
no matter which option you take, it should result in lofs mounts.
Personally I would not mix the concepts; you started with HASP, so I would continue with HASP.
To achieve this you have to take three steps:
1. add the file systems to the HASP resource,
2. create the mountpoints in the local zone,
3. add the file systems to the Mounts variable in the SCZBT's parameter file.
If you do not want to reboot the zones, you can mount the loopback files manually, but the safest option (so you don't mess up with typos in the mount points and realize it later) would be to disable and re-enable the sczbt resource.
    Cheers
    Detlef

  • Best option for MBA storage capacity

    Hi all,
I'm weighing a number of options on which storage configuration to go for with a new MBA. Money is a bit tight, so for that reason I'd prefer to go for the 128 GB SSD version, but I know that will not be enough storage capacity for me.
Therefore I'm mulling several solutions: USB storage, SDHC storage (on the 13" model) or an external HDD via USB. The best value for money is obviously an external HDD, but access times/running speeds (excuse the layman's terms!) are ultimately my priority.
    I don't know much about what it's really like running extra storage permanently like I plan to - I assume it won't be as fast or glitch-free as internal storage would be, but I have no idea whether it'll be acceptable or awful. Example of use are prob running music/video files off it.
I guess my question to you guys is: what kind of access speeds can I expect with the above options?
Or should I just take a deep breath and spring for the 256 GB version? Thanks in advance

    Right - yes, I've just been researching into SD Classes. Had no idea before!
    My ultimate aim is to have some extra storage that performs well enough at a low cost. Typical activity will probably be playing music and watching movies from the external media - what I'm worried about is poor data transfer speeds affecting playback etc. Can you advise on, for example, at what speed this would be a problem, and thus at what speeds I'd need? (sorry for all the questions!)
    If these problems are unlikely to happen with an SDHC, I would be tempted to go for it purely on the aesthetic grounds.
    Thanks again,
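On the question above of what transfer speed playback actually needs, a back-of-the-envelope check helps. The bitrates below are assumed typical figures, not measurements, and the 2x headroom factor is arbitrary:

```python
def min_card_speed_mb_s(bitrate_mbit_s: float, headroom: float = 2.0) -> float:
    """Sustained read speed (MB/s) a card should deliver to play a
    stream of `bitrate_mbit_s`, with a safety factor for seeks."""
    return bitrate_mbit_s / 8 * headroom

# Assumed typical bitrates (illustrative): MP3 music, SD video, HD video
for name, mbit_s in [("music", 0.32), ("SD video", 5), ("HD video", 20)]:
    print(f"{name}: ~{min_card_speed_mb_s(mbit_s):.1f} MB/s sustained read needed")
```

For comparison, an SDHC speed class is the card's guaranteed minimum sequential write speed in MB/s (Class 10 = 10 MB/s), and sustained reads are usually at least that, so even HD playback with headroom fits comfortably within a Class 10 card.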

  • Best alternative for rowversion

    We're currently evaluating SQL Server 2014's In-Memory OLTP for our application and saw in the documentation that rowversion is not a supported datatype: http://msdn.microsoft.com/en-us/library/dn133179(v=sql.120).aspx
    What's regarded as best practice if we basically want to have the same functionality as rowversion in In-Memory OLTP? Using triggers would be a massive performance decrease for us compared to rowversion.

    rowversion isn't a hash,  it's a database-wide durably-monotonically-increasing 8-byte binary value.  So the best way to get similar behavior is to implement your own database-wide durably-monotonically-increasing 8-byte value.
    You can do this with a SEQUENCE object. eg
    create sequence ts_seq as bigint start with 10000000
This is the same way you generate key values for memory-optimized tables, with the difference that this sequence is used to generate timestamp values for multiple tables, instead of key values for a single table. You can then cast the value to binary(8) or store it as bigint, as the sort orders are compatible, ie
for bigint a, b:   a < b iff cast(a as binary(8)) < cast(b as binary(8))
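The compatible-sort-order claim can be checked outside SQL Server. This Python sketch shows that a big-endian 8-byte encoding (which is what a bytewise binary(8) comparison effectively sees) preserves the ordering of non-negative bigint values:

```python
import struct

def as_binary8(n: int) -> bytes:
    # SQL Server compares binary(8) values byte by byte; packing the
    # bigint big-endian makes byte order match numeric order for
    # non-negative values (the sequence here starts at 10,000,000).
    return struct.pack(">q", n)

vals = [10_000_000, 10_000_001, 2**40, 2**62]
encoded = [as_binary8(v) for v in vals]
assert encoded == sorted(encoded)   # byte order agrees with numeric order
```

Note the caveat: for negative values the two's-complement sign bit would break this correspondence, which is one reason to start the sequence at a positive value.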
    You can't access the sequence in a native proc, so you'll have to pass it in.  Eg here's how to fill up a memory table variable with sequence values (remember when NEXT VALUE FOR appears in a SELECT you get a new sequence value for each row of the output):
    DECLARE @od Sales.SalesOrderDetailType_inmem
    SET @DueDate = DATEADD(d, (@i % 10) + 2, @now)
    DELETE FROM @od
    INSERT @od
    SELECT OrderQty, ProductID, SpecialOfferID, LocalID, (next value for ts_seq)
    FROM Demo.DemoSalesOrderDetailSeed
    WHERE OrderID = @i % @max_OrderID
    Of course you must remember to get a new "timestamp" in any operation that inserts or updates the table.
    David
    David http://blogs.msdn.com/b/dbrowne/

  • Best alternative for locating lost library & playlists?

I keep my iTunes audio files on an external hard drive.  I'm not sure if iTunes stores the information (playlists, etc.) on my external or internal hard drive.  I accidentally disconnected my hard drive while iTunes was open.  I quickly quit iTunes, reconnected the hard drive, then opened iTunes again - only to find everything gone.  I'm assuming I need to relocate my Library after holding down the OPTION key - but the .xml files are not highlighted - and then I read something about locating the .itl files .... but when I open the .itl file - it gives me a library from a few months ago - not the end of the world - but none of the new music I added to the library since then shows up.
    I have also tried dragging the .xml file directly into iTunes, but I'm still getting the library from two months ago.
What would be my best bet?  Is there somewhere on either hard drive from which I can make a full recovery?  I also have a backup of my computer on Time Machine - but that hard drive is in another location.  Would restoring my system to two days ago via Time Machine be my best bet to get everything back?  If so .... maybe I can wait!  In the meantime, any help you have would be GREAT.
    Thank you!!!!!

    Double-clicking on an .itl file opens iTunes, but not necessarily the file that was clicked on. To check the contents of an .itl database you need to connect to it after holding down Option(Mac) or Shift(Win) as you click the icon to start iTunes and are prompted to choose or create a new library.
    If you know where the active library file is then this is the easiest way to restore a recent backup.
    Empty/corrupt library after upgrade/crash
    Hopefully it's not been too long since you last upgraded iTunes, in fact if you get an empty/incomplete library immediately after upgrading then with the following steps you shouldn't lose a thing or need to do any further housekeeping. In the Previous iTunes Libraries folder should be a number of dated iTunes Library files. Take the most recent of these and copy it into the iTunes folder. Rename iTunes Library.itl as iTunes Library (Corrupt).itl and then rename the restored file as iTunes Library.itl. Start iTunes. Should all be good, bar any recent additions to or deletions from your library.
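The rename dance above can be scripted. This is only a sketch, assuming the standard iTunes folder layout and dated backup names such as 'iTunes Library 2012-10-18.itl'; it copies rather than moves the backup so nothing is lost:

```python
from pathlib import Path
import shutil

def restore_previous_library(itunes_dir: Path) -> Path:
    """Roll back to the newest dated library file from the
    'Previous iTunes Libraries' folder, keeping the current
    (possibly corrupt) library under a new name."""
    prev = itunes_dir / "Previous iTunes Libraries"
    backups = sorted(prev.glob("iTunes Library*.itl"))  # ISO dates sort chronologically
    latest = backups[-1]
    current = itunes_dir / "iTunes Library.itl"
    current.rename(itunes_dir / "iTunes Library (Corrupt).itl")
    shutil.copy2(latest, current)
    return latest
```

Run it against the iTunes folder only after quitting iTunes, exactly as the manual steps require.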
    See MusicFolder Files Not Added and Super Remove Dead Tracks for tools to catch up with any changes since the backup file was created.
    tt2

  • Best solution for usb storage mounting ?

    Hey there
i installed arch on my wife's computer, keeping it xfce4 for right now to conserve resources. my daughter has a usb camera that i can mount via fstab (sometimes) and mount -t vfat /dev/sda1 yadda yadda.
is there a way i can simplify this so that it automounts ala ubuntu style?
i have looked at usbmount and automount. what would be the best way to go here?

    nephish wrote:cool, does sda1 stay at a permission that will allow users to mount / unmount ?
With ivman? Yes, I believe so, although I would try to stay away from it. In the past I've experienced corruption problems. I'm sure it's fixed now but personally, I feel safer not using it at all.
I run dbus and hal (added in that order to the daemons() array in rc.conf) to dynamically sync the fstab with the proper entries once something is plugged in, and xfce4-mount-plugin to manually mount the device. It sits in the task bar and you can mount/umount each device by clicking on it and then clicking on the corresponding device icon. It's in the AUR.

  • Best Practices for data storage in portal database

    Hi,
I need to store some structured data which is related to the portal only. This data may grow day by day and may reach a huge amount at some point.
I'm wondering which is the best way to handle it from a maintenance point of view.
I think I can store it in R/3 as a custom table, which is easy to maintain and to read/write using RFC.
The other option is to store it in the portal database (dictionary), which may not be easy to handle.
suggestions please?
    Thanks.

The best way is to maintain the (growing) data on R/3 and use JCA to write a simple portal application (JSPDynpage or WD) to get the data back on the portal.
Using JCA won't noticeably affect portal performance.
It's not advisable to store large data in the dictionary.
    Regards,
    N.

  • Dealing with old AVI files - what is best format for 'master' storage.

    I kept a lot of original AVI footage from the original DV Tapes. I see these as the MASTER material.
    I also have the same footage on MPG.
My new Smart TV will link to my network and can see the AVI files but not play them.
It seems I need to convert them to a DivX format of AVI in order to play these files...
    Question is therefore 2 fold.
1 - I want to store/keep, as 'masters', the highest quality of the original recordings. So is the best 'original' the original DV tape AVI files, or would editing these via CS6 and outputting an MPG or MP4 format be as good as the 'original'?
Despite 15 years messing about with digital editing, I remain baffled by what constitutes 'ORIGINAL FORMAT'.
2 - How can I convert the AVI files from the current setup to a DivX codec that my Smart TV will recognise and play? Will CS6 do this, or Encoder? I can't see an option.
I've tried all the AVI outputs and the Smart TV cannot read any of them.
    Regards
    Kev.

    About 4 years ago I transferred all my old 8/Hi8/DV tapes onto my PC exactly the same as you have done.
AVI is a standard introduced by Microsoft; http://en.wikipedia.org/wiki/Audio_Video_Interleave gives the detail and history.
Smart TVs in 2012 could play AVIs (or at least the YouTube demo I just watched did), so you may want to check the manual to find out precisely what your TV will play.
    Keep the originals as they are, do nothing to them.
    Editing and output to whatever the TV can play, MPG and MP4 should be OK and as future proof as possible.
    Try exporting a short say 1 minute of mixed footage to MPG and MP4 and view these, I did just this with my network and Sony TV and they streamed great, then I edited all that old footage to MPG and MP4 (I saved the same movie to both formats just in case).
    Do remember that your AVI's will be SD and these will be upscaled by your TV to fill the HD screen
DivX is great but quite lossy, and there has been a myriad of variants over the years, so future-proofing may be an issue.
    I cannot advise on CS6 as I use CS5.5.
Finally, before you ask: editing the SD AVIs, exporting the movies to DVDs and playing them back via a good quality upscaling Blu-ray player will probably give the best HD results, far better than thinking about software-upscaling your AVIs to HD.
    Col
