PDW storage in distributed locations?

Hi,
Is it possible to create PDW storage in a distributed environment, say across data centers in different geographies? From the configuration specs / architecture diagram, it appears that the entire hardware is a co-located unit, but I wanted to validate my understanding.
Please note that I'm aware of the hub-and-spoke architecture, where we can create distributed data marts fed by the PDW, but my question is about PDW's own storage.
Thanks in advance!

All the storage within PDW is local to the appliance. PDW is optimized for high-performance computing over very large data sets, so the local storage design is key to that model. Having said this, PDW does support connectivity to external systems via PolyBase (HDInsight, for example, with more options supported in AU1) to allow for more of the hub/spoke model.
What is the use case for this? What are you trying to accomplish?

Similar Messages

  • SQL Server 2012 PDW - Controling the physical location of a data partition

    In PDW, for distributed tables, is it possible to know/control which partition will reside on which storage node? Assuming I have 4 compute nodes and data with partitions A, B, C and D (all tables will have only 4 partitions), can I design it so that partition A definitely goes to compute node 1, partition B to compute node 2, C to 3 and D to 4?
    Please share your thoughts with me.

    The idea of an MPP-based DW appliance is to hide this complexity from the user. What is crucial is the choice of the column for the hash key; everything else should be taken care of by the query engine/optimizer. Can you please provide a bit more context on why you want to 'go into the business' of managing individual partitions? Thanks
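    To make that concrete, here is a minimal Python sketch (not PDW internals, just an illustration of the general hash-distribution idea): the hash of the distribution column, modulo the node count, decides placement, which is why individual rows or partitions cannot be pinned to a chosen compute node.

```python
# Conceptual sketch of hash distribution across compute nodes.
# NUM_COMPUTE_NODES and node_for_row are illustrative names, not PDW APIs.
import hashlib

NUM_COMPUTE_NODES = 4

def node_for_row(distribution_key: str) -> int:
    """Return the compute node (0-based) a row hashes to."""
    digest = hashlib.md5(distribution_key.encode()).hexdigest()
    return int(digest, 16) % NUM_COMPUTE_NODES

# Placement is deterministic but opaque: the engine, not the user,
# decides which node holds which rows.
placement = {k: node_for_row(k) for k in ["A", "B", "C", "D"]}
print(placement)
```

    The point of the sketch: the mapping depends only on the hash function and node count, so choosing a good distribution column matters far more than trying to steer individual partitions.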

  • Storage Spaces -error Location not available.

    I have had Storage Spaces running for a couple of weeks: 4 drives, 3 TB each, same make/model, pooled into a striped "RAID 5"-style setup through Storage Spaces. I travel and do some shift work, and I frequently try to access the drives remotely only to be told that the drive pool is not accessible. When I get home I attempt to access it locally, with the same issue.
    I see my pool (G:) in my drive list and can click on it with no problem; I can see the first directory and go into one folder. If I try to do anything in that folder, go to any other folder, or use any file in the root, it sits and waits as if it is searching. I hear no drives spinning up, and after about 2 minutes I get a pop-up that says the location is not accessible.
    In order to access the information, I need to shut the computer down and reboot. Not only is this a bother, but sometimes it is not possible remotely.
    Are the drives going to sleep and not waking up?
    Any help on how to fix this would be greatly appreciated.
    Chris

    Hello Chris,
    Do you mean that you used a RAID 5 array to create a storage space, it worked for weeks, and now has the 'location not accessible' issue?
    Please share the whole error message with us; a screenshot is preferred.
    Based on my test, I can create a storage space with 3 VHDs normally, and it works with no accessibility issue.
    Please try to change the ownership and permissions of the drive and check whether the issue still exists.
    For more information about taking ownership, please take a look at the following article.
    https://technet.microsoft.com/en-us/library/cc753659.aspx?f=255&MSPPError=-2147217396
    Best regards,
    Fangzhou CHEN
    TechNet Community Support

  • VoIP system for a large corporation with distributed locations

    Asterisk does all of that for free.  Those are all very basic needs.  I don't know any PBX that doesn't support that.

    Depending on specific needs for secure calls you might not have a lot of choices beyond Cisco and Avaya and NEC.
    Do you need always encrypted RTP and provisioning between endpoints?  Do you need this for some regulatory or government requirement?
    Cisco, Avaya and NEC (ok and Mitel) combined dominate the enterprise market.  So that is a good place to focus your efforts.  If you are open to manufacturers outside of that realm then define your requirements a little more and I am sure you will get plenty of suggestions.  The only manufacturer outside of those three or four that I have experience implementing and maintaining more than 2000 seats would be Zultys.

  • Multiple storage locations

    Hi,
    When creating an outbound delivery, how can I set up automatic storage location determination if my material exists in multiple storage and picking locations? I cannot set this up in OVL3 since they have the same plant/shipping point/storage condition.
    Thanks.

    Hi there,
    In Warehouse Management, there is a concept called picking strategies. You can use that if you have multiple picking locations for the same material.
    Regards,
    Sivanand

  • Qmaster Cluster Storage Location Creating Duplicate Clusters

    I'm in the process of setting up a Qmaster cluster for transcoding jobs and also for Final Cut Server. We have an Xserve serving as the cluster controller with a RAID attached via fiber that is serving out an NFS Share over 10GB Ethernet to 8 other Xserves that make up the cluster. The 8 other Xserves all are automounting the NFS Share. The problem we are running into is that we need to change the default "Cluster Storage" location (Qmaster preference pane) to the attached RAID rather than the default location on the system drive. Primarily because the size of transcodes we are doing will fill the system drive and the transcodes will fail if it is left in the default location.
    Every time we try to set the "Cluster Storage" location to a directory on the RAID and then create a cluster using QAdministrator, it spontaneously generates a duplicate cluster and prevents you from modifying the cluster you originally made, saying that it is currently in use or being edited by someone else.
    Duplicated Cluster.
    Currently being used by someone else.
    If you close QAdministrator and then try to modify the cluster, it says it is locked and prompts for a password, despite the fact that no password was set up for the cluster. Alternatively, if you do set up a password on the cluster, it does not actually work in this situation.
    If the "Cluster Storage" location is set back to its default, none of this duplicated-cluster business happens at all. I checked and verified that permissions were the same between the directory on the RAID and the default location on the system drive (/var/spool/qmaster). I also cleared out previous entries in /etc/exports, and that didn't resolve anything. Also, every time any change has been made, services have been stopped and started again. The only difference I can see between using /var/spool/qmaster and another directory on our RAID is that once a controller has been assigned in QAdministrator, the storage path that shows up is different. The default is nfs://engel.local/private/var/spool/qmaster/4D3BF996-903A30FF/shared and the custom is file2nfs://localhost/Volumes/FCServer/clustertemp/869AE873-7B26C2D9/shared. Screenshots are below.
    Default Location
    Custom Location
    Kind of at loss at this point any help would be much appreciated. Thanks.

    Starting from the beginning, did you have a working cluster to begin with or is this a new implementation?
    A few major housekeeping items (assuming this is a new implementation): make sure the Qmaster nodes have the same versions of Qmaster, QuickTime, and Compressor (if they have Compressor loaded).
    The only box that really matters as far as the cluster storage location goes is the controller; it tells the rest of the boxes where to look. On your shared storage, create a folder such as "CLUSTER_STORAGE" or something to that effect, then in the controller's preference pane set the controller's storage to that location. It will create a new folder with a cryptic name of numbers, letters and dashes, and use that as the storage location for every computer in the cluster.
    Now... what I'm seeing in your first screenshot worries me a little. I have had the same issue, and the only way I've found to remedy it is to pull everything Final Cut Studio off that box, do a completely fresh reinstall, and then update everything again to the same versions. I'm assuming you're using FCStudio 7.x?
    We should be able to get you on your feet with this. Configuring is always the hardest part, but when it gets done.. man it's worth it
    Cheers

  • ADCS DB storage options

    I am commissioning a set of subordinate CAs to replace the Win2003 servers we have been running for a while. I'm starting afresh, and I've been reading up on ADCS clustering, which sounds ideal. These CAs will only issue a few thousand certs over their lifetime but obviously need to be resilient. The CAs will be deployed to different locations, and I wanted to configure them to use a common DB location implemented with DFS-replicated shares. So I've created shares on my domain controllers, and these auto-replicate site-to-site. The issue is that when I go through the process of adding the ADCS role and specify the location of the DB and log directories on these remote shares, ADCS won't start ("Active Directory Certificate Services did not start: Unable to initialize the database connection for xyz"). If I modify the registry and point to local drives on the CA, it starts fine, but if I change to remote drives or a UNC path referencing the DFS share, I get the same error. I've tried adding startup tasks to establish the remote drive mappings for the system account, but still no joy. The Microsoft guide "Active Directory Certificate Services (AD CS) Clustering" tells me I need to use a shared storage solution to locate the DB before clustering the CAs. Is this the only way to do this? Has anyone else managed to cluster ADCS without shared storage?
    Thanks, Martin

    Brian is right. When I wrote that paper, the only supported (and workable) scenario was to use shared storage. This can cause some complexity in geographically dispersed clusters, but if they have access to a common SAN or shared storage, the picture is a bit easier. You can use any transport you want, including iSCSI; it just has to be sharable storage.
    Mark B. Cooper, President and Founder of PKI Solutions Inc., former Microsoft Senior Engineer and subject matter expert for Microsoft Active Directory Certificate Services (ADCS). Known as “The PKI Guy” at Microsoft for 10 years.

  • Force file download for IE9 on Azure Blob Storage

    Hello,
    I am trying to use Azure Blob Storage as a location for secure file downloads using a Shared Access Signature. Everything is working very well; however, the problem is that I am trying to allow the user to save files from the browser, and I have all browsers except IE9 working.
    Reviewing this question,
    What content type to force download of text response?
    this works well when I can control all of the headers. In Azure Blob Storage, however, I have set the Content-Type to application/octet-stream, and while this makes all browsers except IE ask the user to save the file, IE simply opens it. It appears that known file types will open (for example .jpg, .wmv, etc.).
    In Azure, I have found no way to set
    Content-Disposition: attachment;filename="My Text File.txt"
    Is there a way, using Azure Blob Storage, to use IE to download any file directly from Azure Blob Storage?
    Thanks in advance.

    Hi,
    Actually, we can't set Content-Disposition for blobs, and I can't think of any other workaround on the storage side. From my experience, in most cases IE's behavior is fine. I would like to know why you have to prompt a download. The user can see the text file, and if they wish to save it locally, they have more than one way to do that (copy and paste, save file, etc.). If they simply want to read the text and then forget it, that's also fine; they don't even have to download it and then double-click a local file to read the content.
    If you have to modify the behavior, the only workaround I can think of is to use a web role as an intermediate bridge, and add the Content-Disposition from your web role.
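    A minimal Python sketch of that bridge idea (the helper name is hypothetical, not an Azure SDK call): the web role fetches the blob server-side and re-serves it with the headers that force a save-as dialog, including in IE9.

```python
# Hypothetical helper for the "intermediate bridge" workaround: build the
# response headers a web role would add when re-serving a blob, since
# Blob Storage itself cannot attach Content-Disposition in this scenario.

def forced_download_headers(filename: str) -> dict:
    """Headers that prompt a save-as dialog instead of inline display."""
    return {
        "Content-Type": "application/octet-stream",
        "Content-Disposition": f'attachment; filename="{filename}"',
    }

print(forced_download_headers("My Text File.txt"))
```

    The web role would stream the blob body unchanged and attach these headers to its own response, so the browser never sees the blob URL directly.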
    Best Regards,
    Ming Xu.
    Please mark the replies as answers if they help or unmark if not.
    If you have any feedback about my replies, please contact
    [email protected]
    Microsoft One Code Framework

  • Fiber chanel storage

    Hello!
    I have an HP disk storage array connected to Solaris 9 via FC.
    The system presents 3 devices:
    #luxadm probe
    Found Enclosure:
    SUNWGS INT FCBPL   Name:FCloop   Node WWN:508002000027d2c8   Logical Path:/dev/es/ses1
    Found Fibre Channel device(s):
      Node WWN:50001fe1500b3fe0  Device Type:Disk device
        Logical Path:/dev/rdsk/c8t600508B4001076A70000600000F50000d0s2
      Node WWN:50001fe1500b3fe0  Device Type:Disk device
        Logical Path:/dev/rdsk/c8t600508B4001076BE0000700001B70000d0s2
      Node WWN:50001fe1500b3fe0  Device Type:Disk device
        Logical Path:/dev/rdsk/c8t600508B4001076BE0000700001620000d0s2
    Devices /dev/rdsk/c8t600508B4001076A70000600000F50000d0s2 and /dev/rdsk/c8t600508B4001076BE0000700001B70000d0s2 have 4 paths:
    #luxadm display /dev/rdsk/c8t600508B4001076A70000600000F50000d0s2 | grep Controller | wc -l
    4
    #luxadm display /dev/rdsk/c8t600508B4001076BE0000700001B70000d0s2 | grep Controller | wc -l
    4
    Device c8t600508B4001076BE0000700001620000d0s2 has only 1 path.
    luxadm display /dev/rdsk/c8t600508B4001076A70000600000F50000d0s2 | grep "Device Address"
        Device Address              50001fe1500b3fe8,1
        Device Address              50001fe1500b3fec,1
        Device Address              50001fe1500b3fe9,1
        Device Address              50001fe1500b3fed,1
    luxadm display /dev/rdsk/c8t600508B4001076BE0000700001B70000d0s2 | grep "Device Address"
        Device Address              50001fe1500b3fe8,3
        Device Address              50001fe1500b3fec,3
        Device Address              50001fe1500b3fe9,3
        Device Address              50001fe1500b3fed,3
    and
    luxadm display /dev/rdsk/c8t600508B4001076BE0000700001620000d0s2 | grep "Device Address"
        Device Address              50001fe1500b3fe8,2
        Device Address              50001fe1500b3fec,2
        Device Address              50001fe1500b3fe9,2
        Device Address              50001fe1500b3fed,2
    Why are the other paths not available for this device?

    Hi,
    I agree with Gulab Prasad.
    First of all, run the command below against the affected database to check whether it is in a clean shutdown or a dirty shutdown state.
    eseutil /mh "path of the edb file"
    If it is in a clean shutdown state, you can mount the database. Before that, change the storage group log file location to a different path with the command below; new log files will then be generated in the new location once the database is mounted.
    Move-StorageGroupPath -Identity "MyStorageGroup" -LogFolderPath "D:\MyNewLogFolder"
    If it is in a dirty shutdown state, some log files will definitely be required to mount the database. If you have those logs, you can perform a soft recovery to bring the database back to a clean shutdown state and then mount it.
    If you don't have those logs, then as Gulab Prasad said, there are two options:
    1. Hard repair:
    eseutil /p "path of the affected database"
    With this process there is some possibility of data loss.
    2. Restore the database file and its respective log files from the last successful backup.
    As additional information: to avoid downtime you can mount a dial tone database, and then use the Recovery Storage Group (RSG) in Exchange 2007 to recover the mailbox data to the original mailboxes.
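    The clean/dirty check above is easy to automate; here is a minimal Python sketch (a hypothetical helper, not an Exchange tool) that parses the "State:" line from eseutil /mh output.

```python
# Hypothetical helper: extract the shutdown state from `eseutil /mh`
# output, so a script can decide whether the database is safe to mount.

def shutdown_state(eseutil_output: str) -> str:
    """Return the value of the 'State:' line, e.g. 'Clean Shutdown'."""
    for line in eseutil_output.splitlines():
        stripped = line.strip()
        if stripped.startswith("State:"):
            return stripped.split(":", 1)[1].strip()
    raise ValueError("no State: line found in eseutil output")

sample = "Format ulVersion: 0x620\nState: Clean Shutdown\nLog Required: 0-0"
print(shutdown_state(sample))  # Clean Shutdown
```

    Anything other than "Clean Shutdown" means logs are required and the soft-recovery or repair paths described above apply.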
    Thanks & Regards S.Nithyanandham

  • Storage Capacity - SNP

    Hi Gurus,
    My client needs to maintain storage capacity in the system, as the warehouse can't hold more than 6,000 units in that particular location.
    We are running the deployment optimizer, and if the quantity exceeds 6,000 units it has to be routed to another warehouse.
    I have seen an option in the location master for storage capacity, but it is more specific to replenishment orders for SNC. If I use this, will there be any other side effects in the system?
    Is there any other place where I can maintain the storage capacity, so that if it crosses the limit the quantity is routed to other locations? Kindly help me in this regard, as it has to be addressed ASAP.
    Regards,
    Arabind

    Hi Arabind,
    One method to do this would be as follows.
    Define a storage resource (category S) in the RES01 tcode and define the capacity per day in units in the "Definitions" area.
    Assign this storage resource to the location master in the Storage Resource field on the Resources tab.
    Alternatively, define the capacity in m3/pallets etc. in the storage resource and maintain the consumption per unit in the Storage Cons. per BUn field on the GR/GI tab of the product master.
    This will restrict the maximum storage in the location.
    Thanks & Regards,
    Ashok

  • What are the advantages of MDM?

    What is the exact use for a company if they implement MDM?

    Hi Ganishetti,
    Master data is data about your customers, products, suppliers, etc. The quality of master data has an impact on transactional and analytical data.
    What MDM Does:
    Aggregate master data across SAP and non-SAP systems into a centralized master data repository. Once data is consolidated, you can search for data across linked systems, identify identical or similar objects across systems, and
    provide key mapping for reliable companywide analytics and reporting.
    Consolidate and harmonize master data from heterogeneous systems. Ensure high quality master data by distributing harmonized data that is globally relevant using distribution
    mechanisms. Allow subscribing applications to enrich master data with locally relevant information.
    Support company quality standards by ensuring the central control of master data, including maintenance and storage.
    Distribute centrally created master data to client systems as required using distribution mechanisms.
    System consolidation from R/3, ERP and other sources
    Direct ODBC system access, extraction of flat files, 3rd-party application data, XML sources and many more…
    Single pass data transformation, auto-mapping, validation rules, and exception handling.
    Business users can define matching rules and complex matching strategies, and conduct data profiling.
    Data Enrichment Controller to use 3rd-party sources and other partners for address completion, company validation and data enrichment.
    Search and compare records, identify sub-attributes for consolidation in sub-second response times
    Merge records seamlessly, tracking source systems with built in key mappings
    Leverage out of box data models for consolidating data
    CONSOLIDATION
    Extract, cleanse and consolidate master data; consolidation has never been easier.
    HARMONIZATION
    Cleanse and distribute across entire landscape.
    CENTRAL MDM
    Create consistent master data from the start centrally.
    Advantages Of MDM
    End-to-end solution. The MDM system provides an end-to-end solution that automates the entire process of managing master data from start to finish, including bulk data import, centralized master data management, and published output to a variety of media.
    • Database-driven system. MDM layers a thick shell of functionality on top of a powerful SQL-based DBMS, so the MDM system is fully scalable and the master data is fully accessible to other SQL-based applications and tools.
    • Large capacity. The MDM system efficiently manages master data repositories containing up to millions of records.
    • Superior performance. MDM breaks through SQL performance bottlenecks to deliver blazingly fast performance that is measured in milliseconds rather than seconds and is literally 100–1000 times that of a SQL DBMS alone. No other system on the market today delivers comparable performance.
    • Powerful search and retrieval. All of the MDM modules include powerful search and retrieval capabilities, so that an entire repository of thousands or millions of items can be easily searched and any item or group of items located in a matter of seconds.
    you can follow the blog
    /people/karen.comer/blog/2006/12/19/understanding-sap-netweaver-master-data-management-from-an-sap-netweaver-business-intelligence-perspective
    Reward if helpful
    Regards,
    Vinay Yadav

  • Is there a way to use the iCloud framework with keeping my data local?

    Even before the NSA scandal and Apple's involvement in it came to light, I wasn't really comfortable with uploading my data to iCloud. After all, there's an American saying that "possession is 80% of ownership". Frankly, I don't want to give Apple de facto ownership of my data.
    What I was hence hoping for was to buy a mac mini server edition and maybe change some preferences in order to switch my data storage and syncing location from North Carolina to my cupboard, maybe additionally providing that service to my entire family.
    Some Apple "genius" told me that wasn't possible, the iCloud servers being "hardwired" into the software.
    Is that the case, or is there some way around this? I am already using Dropbox to sync photos with my devices and some family members, but I really would like the option to change the server address of my iCloud installation, as it would make things a lot simpler and more integrated.
    Thanks for any reply and advice.

    In this case - Dear Apple MacOS X engineers. Pretty pleeeeeeeease incorporate such a feature in the next version of OSX (Server). That probably wouldn't go well with your buddies at Fort Meade, but might open a big chunk of the retail market for server products.

  • Got a new laptop and don't have my original Adobe disk. I do have my serial number for Acrobat 9 Standard. How do I download it onto my new laptop, please?

    Hello. I've gotten a new laptop. I cannot find my Adobe Acrobat 9 disk, as it's still packed and in storage from moving residences.
    I have my Adobe Acrobat 9 serial number. How can I download it now for my new laptop, please?
    Thank you. Andrea

    Thank you, Anubha, for your reply. I will search for the disk as soon as
    I'm able to go to another state where my storage unit is located, however
    right now, I have a serial number for my Adobe 9 - it is:
    1016-1934-5048-3315-8945-0131 according to my account on the Adobe website.
    I do have, with me, an Adobe 8 disk - however,  at some point,  I must have
    upgraded to Adobe Acrobat 9 - because that is the serial number in my Adobe
    account.  So, do I have Adobe 9 via an upgrade download link sent to me by
    Adobe?  I believe that is the case, but cannot find the record of that in
    my Adobe account.
    Can you please let me know if you show an upgrade to 9 in my account on
    your end?  If not, can you tell me what an upgrade would cost from Adobe
    Acrobat 8, to Adobe Acrobat 10?
    Thank you....
    Andrea

  • [865PE/G Neo2 Series] Why do I have a big ol' amount of space reserved for an MFT on my SATA drive?

    The hard drive is on the Promise controller, and for some reason it has a large amount of space used for the MFT, when even my system drive and my other storage drive, both located on the Intel SATA ports, do not have this space set aside.

    How did you figure that out? And how large is that amount?

  • 11.2.0.3 grid installation fails while selecting OCFS2 for ocr files

    We are installing an 11gR2 (11.2.0.3) cluster on a 64-bit system. We have an OCFS2 filesystem (version 1.6.3) for the shared devices.
    While selecting the OCR file locations, we get the following error:
    [INS-41321] Invalid Oracle Cluster Registry (OCR) location
    Cause - The installer detects that the storage type of the location is not supported for the Oracle Cluster Registry.
    Action - Provide a supported storage location for the Oracle Cluster Registry.
    Additional information:
    /crp2db01/OCR/ocr_1 is not shared
    However, this mountpoint is shared across both nodes.
    Note: the 11.2.0.1 grid installation was successful and accepted the above locations for the OCR; however, we need an 11.2.0.3 cluster for an 11.2.0.3 database.

    As for your current problem: just because Oracle "allows" OCFS2 in a Grid environment, I would never suggest nor implement it. It adds a layer of complexity that is totally unnecessary when a Grid/ASM implementation runs circles around OCFS2. ASM is much easier to manage, maintain, expand and shrink than OCFS2, especially at version 11.2.0.3. When working at a large telco a few years ago, we had a 300 TB+ ASM environment; OCFS2 could not even begin to be that big. ASM will provide you a MUCH more stable environment than OCFS2, and with ASM there is a lot of "magic" that happens with OCR/voting that makes your life MUCH easier. If you "require" shared application files, then use ASM/ACFS. It is a much better "volume manager" than OCFS2.
    Since you must present devices to the system for OCFS2, you should not have any problems doing the same for ASM. (And don't use ASMLib, as it is going away and is not necessary; just make sure you use a partition that skips the first 1 MB (usually cylinder 1) and you should be good to go!)
    I also would not use a shared ORACLE_HOME on either ACFS or OCFS2. The biggest reason is that you lose the ability to do a "rolling" upgrade, and when you have a VLC, that becomes much more important than saving a few GB worth of storage.
    I would also pay attention to this:
    http://docs.oracle.com/cd/E11882_01/install.112/e22489/storage.htm#CDEDAHGB
    3.1.4.2 General Storage Considerations for Oracle RAC
    Use the following guidelines when choosing the storage options to use for each file type:
    You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
    If you plan to install an Oracle RAC home on a shared OCFS2 location, then you must upgrade OCFS2 to at least version 1.4.1, which supports shared writable mmaps.
    Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.
    For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database or recovery files.
