CreateSchemaBasedXML using multiple schemas

I am trying to perform XSD validation against an XML document within PL/SQL and I'm having trouble getting it working.
I have created an XMLTYPE object which, when I call isSchemaBased() against it, returns 0 (false). Since the XMLTYPE obviously needs to be schema based in order to be validated, I found that you can create a schema-based version by calling createSchemaBasedXML against the XMLTYPE. The problem I am having is that my schema is split into two parts (combining the schema files is unfortunately not an option), which means that when I try to createSchemaBasedXML specifying the main schema, it fails because it is unable to resolve a reference that is imported from the second XSD document.
-- lxml is the XMLTYPE which has been populated with the XML before this point
dbms_xmlschema.registerSchema(
  schemaURL => mainSchemaURL,
  schemaDoc => mainSchemaDoc,
  local => true,
  genTypes => true,
  genTables => false,
  force => true,
  enableHierarchy => dbms_xmlschema.ENABLE_HIERARCHY_NONE);
dbms_xmlschema.registerSchema(
  schemaURL => importedSchemaURL,
  schemaDoc => importedSchemaDoc,
  local => true,
  genTypes => true,
  genTables => false,
  force => true,
  enableHierarchy => dbms_xmlschema.ENABLE_HIERARCHY_NONE);
if lxml.isSchemaBased() = 1 then
  dbms_output.put_line('Schema based');
else
  dbms_output.put_line('Non-schema based');
end if;
dbms_output.put_line('About to apply schema');
lxml := lxml.createSchemaBasedXML(mainSchemaURL);
dbms_output.put_line('We don''t get this far');
lxml := lxml.createSchemaBasedXML(importedSchemaURL); 
Is there any way to register both the mainSchemaURL and the importedSchemaURL at the same time, so that the imported schema references within the main schema don't cause the failure:
ORA-31079: unable to resolve reference to type [type containing imported type]
Any help or pointers would be greatly appreciated.

The code now gets to where it is trying to register the mainSchema, but fails with an "invalid XML document" exception (which is untrue, as the XSD is valid).
This was probably not the complete error message.
When I run your example I get :
ORA-31154: invalid XML document
ORA-19202: Error occurred in XML processing
LSX-00023: unknown namespace URI "App1/ImportedNamespace"
ORA-06512: at "XDB.DBMS_XMLSCHEMA_INT", line 3
ORA-06512: at "XDB.DBMS_XMLSCHEMA", line 14
ORA-06512: at line 4
Oracle is not able to resolve the definition of the imported type.
I know the spec doesn't make it mandatory, but we also need to specify the schemaLocation in the import directive :
<xs:import namespace="App1/ImportedNamespace" schemaLocation="importedSchemaURL.xsd"/>
Complete test case :
-- Register the imported schema
begin
  dbms_xmlschema.registerSchema(
    schemaURL => 'importedSchemaURL.xsd',
    schemaDoc => '<?xml version="1.0" encoding="utf-8"?>
  <xs:schema id="App1_ImportedNamespace_v0_1_1"
      targetNamespace="App1/ImportedNamespace"
      xmlns="App1/ImportedNamespace"
      elementFormDefault="qualified"
      version="0.1.1"
      xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:simpleType name="ImportedType">
        <xs:restriction base="xs:string">
          <xs:minLength value="2" />
        </xs:restriction>
      </xs:simpleType>
  </xs:schema>',
    local => true,
    genTypes => false,
    genTables => false,
    enableHierarchy => dbms_xmlschema.ENABLE_HIERARCHY_NONE);
end;
/
-- Register the main schema
begin
  dbms_xmlschema.registerSchema(
    schemaURL => 'mainSchemaURL.xsd',
    schemaDoc => '<?xml version="1.0" encoding="utf-8"?>
  <xs:schema id="App1_MainNamespace_v0_1_1"
      targetNamespace="App1/MainNamespace"
      xmlns="App1/MainNamespace"
      elementFormDefault="qualified"
      version="0.1.1"
      xmlns:xs="http://www.w3.org/2001/XMLSchema"
      xmlns:Imported="App1/ImportedNamespace">
      <xs:import namespace="App1/ImportedNamespace" schemaLocation="importedSchemaURL.xsd"/>
      <xs:complexType name="mainComplexType">
        <xs:sequence>
          <xs:element name="JustAString" type="xs:string" />
          <xs:element name="ImportedString" type="Imported:ImportedType" />
        </xs:sequence>
      </xs:complexType>
      <xs:element name="root" type="mainComplexType"/>
     </xs:schema>',
    local => true,
    genTypes => false,
    genTables => false,
    enableHierarchy => dbms_xmlschema.ENABLE_HIERARCHY_NONE);
end;
/
Besides the schemaLocation attribute, I've just added a root element in the main schema, and registered both with genTypes => false (use that option if you don't intend to use object-relational storage).
Validation test :
SQL> declare
  2
  3    doc xmltype := xmltype(
  4  '<root xmlns="App1/MainNamespace"
  5         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  6         xsi:schemaLocation="App1/MainNamespace mainSchemaURL.xsd">
  7    <JustAString>ABC</JustAString>
  8    <ImportedString>X</ImportedString>
  9  </root>');
10
11  begin
12
13   doc.schemaValidate();
14
15  end;
16  /
declare
ERROR at line 1:
ORA-31154: invalid XML document
ORA-19202: Error occurred in XML processing
LSX-00221: "X" is too short (minimum length is 2)
ORA-06512: at "SYS.XMLTYPE", line 354
ORA-06512: at line 13
Seems to work
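For completeness, the createSchemaBasedXML route from the original code should also work now that both schemas are registered. A minimal sketch against the same test schemas, this time with a value that passes the minLength check:
declare
  lxml xmltype := xmltype(
'<root xmlns="App1/MainNamespace">
  <JustAString>ABC</JustAString>
  <ImportedString>XY</ImportedString>
</root>');
begin
  -- binds the instance to the registered main schema; the reference to
  -- ImportedType resolves because importedSchemaURL.xsd is registered too
  lxml := lxml.createSchemaBasedXML('mainSchemaURL.xsd');
  dbms_output.put_line('isSchemaBased = ' || lxml.isSchemaBased());
  lxml.schemaValidate();
end;
/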

Similar Messages

  • Using Multiple Schemas in one Application

    Is it possible to have multiple schemas from the same database in one application? In APEX we have added multiple schemas to our workspace, but when we create a form or report only one schema seems to be available.

    Miroslav wrote:
    Your application has a parsing schema. All other schemas have to give appropriate grants to it to be available to your application.Miroslay is right, if you use the INTERNAL workspace to add additional schemas to a workspace, APEX does not take care of any grants between the schemas. APEX can not know how or if you might want the schemas to interact with one another. All this does is gives the workspace the ability to utilise that schema, such as adding it as parsing schema to any applications in the workspace.
    You can switch to the secondary schema in SQL Workshop and add grants from there, e.g.
    grant select on SECONDARY_SCHEMA.MY_TABLE to PARSING_SCHEMA;
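    If you also want to reference the table without the owner prefix, a synonym can be layered on top of the grant (a minimal sketch, assuming the grant above is in place):
    -- run as PARSING_SCHEMA (or create a public synonym as a DBA)
    create synonym MY_TABLE for SECONDARY_SCHEMA.MY_TABLE;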

  • Database Design - multiple schemas

    Hi!
    We're currently designing a DB for an AUTHENTICATION SYSTEM where several users from different companies (around 40) will have to be authenticated -- connected to Oracle. Authentication and fast recovery are important.
    Recovery/Backup
    An issue was raised: what if a schema encounters a problem? Then of course you have to back up the entire data set. So we are considering using multiple schemas.
    One Company = One Schema
    So if one schema is down, the other schemas will not be affected, and recovery is faster.
    Actually, we're quite hesitant to use multiple schemas because of maintainability -- managing different schemas is too much burden for our developers.
    Will the idea of having multiple schemas be advantageous to what we want to achieve?
    Is this a good design or any other idea to handle this kind of situation?
    Can Partitioning do the same?
    Thanks a lot

    Advantages of multiple schemas:
    - each schema is entirely separate
    - you can maintain at different times/dates for different companies
    - different schemas could be on different databases / servers
    Disadvantages
    - any 'shared' data may have to be duplicated (but you can always use a shared schema for reference data)
    - yes, you have to maintain each schema separately (but that would be by scripts, and at least they'd be well tested!)
    - the dictionary (SYS tables) will be somewhat larger (40 copies of table, index, PL/SQL definitions)
    - you'll have 40 identical sets of SQL cached; they all look the same, but relate to different schemas. So you need a bigger SGA.
    Can Partitioning do the same? No - partitioning is a solution to a physical problem, not a security problem.
    Is this a good design or any other idea to handle this kind of situation? I think either way works - it depends on size, number of users, are you using a third tier, etc.
    Or, with a single schema, you can use VPD - virtual private database (otherwise known as FGAC - fine grained access control or RLS - row level security).
    See eg http://builder.com.com/5100-6388_14-5062064.html and also Ask Tom http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:70287097313911 which refers to the documents.
    You can also implement a kind of VPD on the cheap by using user-defined namespaces and the SYS_CONTEXT function, combined with application logic and clever view definitions -- see the sketch below.
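    A minimal sketch of that cheap approach (all names here are hypothetical -- an APP_CTX context set at login and one view per shared table):
    -- context owned by the trusted package that sets it
    CREATE OR REPLACE CONTEXT app_ctx USING app_ctx_pkg;
    CREATE OR REPLACE PACKAGE app_ctx_pkg AS
      PROCEDURE set_company(p_company_id IN NUMBER);
    END app_ctx_pkg;
    /
    CREATE OR REPLACE PACKAGE BODY app_ctx_pkg AS
      PROCEDURE set_company(p_company_id IN NUMBER) IS
      BEGIN
        -- called by the application right after authentication
        DBMS_SESSION.SET_CONTEXT('app_ctx', 'company_id', p_company_id);
      END set_company;
    END app_ctx_pkg;
    /
    -- applications query the view, never the base table
    CREATE OR REPLACE VIEW v_accounts AS
      SELECT * FROM accounts
      WHERE company_id = SYS_CONTEXT('app_ctx', 'company_id');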
    HTH
    Regards Nigel

  • Multiple schemas

    A project here has the following requirement.
    Want to use multiple schemas in the same database instance with identical structure. That is, from one login we want to determine the schema name to use at runtime and access it using the same mapping descriptors.
    Is this possible with TopLink? Thanks.
    Haiwei

    Hello James
    Thanks for the information. However the setting does not seem to make any difference.
    For example, I was trying to do the following.
    The ServerSession is created with a connection pool using the database user TESTUSER. Then I get a ClientSession and set the table qualifier on it.
    ClientSession clientSession = getServerSession().acquireClientSession();
    clientSession.getLogin().setTableQualifier("SCOTT");
    unitOfWork = clientSession.acquireUnitOfWork();
    Then I expected the SQL statements like
    SELECT empname FROM SCOTT.emp;
    but that is not happening; the queries are generated without any qualifier on the table name.
    Am I missing anything? I am using Oracle TopLink - 10g Release 3 (10.1.3.3.0)
    Thanks and Regards
    Potu

  • How to use one application with multiple schema without copying application?

    Hi,
    Previously we were using Oracle Forms, where we could manage with a set of folders containing FMX files and use a different schema/database for different customers. So the source code comes from one individual file but is used for different database users.
    Is it possible to do this without copying the application in APEX?
    The reason is that if applications are copied for each customer, then when a page has a bug the developer must correct the same page across all the applications. This would not be appropriate to manage.
    Could this be done in APEX, or is there any other approach?

    Hi,
    An application is tied to its parsing schema, so it is not possible to have one code base which you can then point to different schemas. I have seen some threads relating to dynamically setting the parsing schema, but I don't think it has worked too well, and it would not be a configuration supported by Oracle.
    The normal way to do this is to have one schema, and for each entity where it is logical you have an extra key which is the customer id. I mention where it is logical, because not every entity needs its own data defined by customer. Some data will be common across all customers, such as lookup data, and some entities will comprise child entities for which the data separation is implied by the parent. You can then use Oracle's Virtual Private Database feature to implement a separate view of the data through the application, most likely based on the customer who is logged on. A rough sketch follows.
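    The VPD side could look something like this (hypothetical names throughout; the predicate function reads the customer id from an application context set at login):
    -- returns the WHERE clause Oracle appends to every statement on the table
    CREATE OR REPLACE FUNCTION customer_predicate(
      p_schema IN VARCHAR2,
      p_object IN VARCHAR2) RETURN VARCHAR2
    IS
    BEGIN
      RETURN 'customer_id = SYS_CONTEXT(''app_ctx'', ''customer_id'')';
    END;
    /
    BEGIN
      DBMS_RLS.ADD_POLICY(
        object_schema   => 'APP_OWNER',
        object_name     => 'ORDERS',
        policy_name     => 'orders_by_customer',
        function_schema => 'APP_OWNER',
        policy_function => 'CUSTOMER_PREDICATE',
        statement_types => 'SELECT,INSERT,UPDATE,DELETE');
    END;
    /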
    Hope this helps.
    Regards
    Andre

  • Multiple schema used on apex?

    hi all,
    I posted a thread here because I need some comments or suggestions for database design.
    I just wanted to ask for comments from APEX developers as well: is it recommended to have more than one schema if I use Oracle Application Express as the front-end tool?
    Considering that having multiple schemas means multiple workspaces, there will be features that I won't be able to use, such as single sign-on for all applications, in this case.
    What I want to know is: are there any other ways to have a user sign in to one application and be authenticated in all applications, even though those applications are on different workspaces?
    thanks
    allen

    Allen,
    One Workspace doesn't mean one schema. You can have multiple schemas assigned
    to one workspace.
    Denes Kubicek
    http://deneskubicek.blogspot.com/
    http://www.opal-consulting.de/training
    http://apex.oracle.com/pls/otn/f?p=31517:1
    -------------------------------------------------------------------

  • Multiple Schemas under one user account with XE 10g

    Hi,
    I am using (learning) XE 10g. I would like to know if it is possible to have multiple schemas under one user account and have the schemas logically separated. As of right now, I have three schemas that I am working with, each one under a different user account. This is inconvenient, because I have to log out of one user account and log in to another simply to be able to work with another schema.
    Thanks

    It isn't possible to have multiple schemas under one database user account. It is of course possible to grant rights to other database users and/or roles in order to allow access to the tables/data from other accounts. In Oracle there is a one-to-one mapping between schema and user.
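    For example, a minimal sketch (SCHEMA_A and SCHEMA_B are placeholder accounts):
    -- as SCHEMA_A: let SCHEMA_B read one of its tables
    GRANT SELECT ON schema_a.my_table TO schema_b;
    -- as SCHEMA_B: no re-login needed, just qualify the name
    SELECT COUNT(*) FROM schema_a.my_table;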
    Niall Litchfield
    http://www.orawin.info/

  • [Forum FAQ] How to use multiple field terminators in BULK INSERT or BCP command line

    Introduction
    Some people want to know whether we can have multiple field terminators in BULK INSERT or BCP commands, and how to implement them.
    Solution
    For character data fields, optional terminating characters allow you to mark the end of each field in a data file with a field terminator, as well as the end of each row with a row terminator. If a terminator character occurs within the data, it is interpreted as a terminator, not as data, and the data after that character belongs to the next field or record. I have done a test; if you use BULK INSERT or BCP commands and want to set multiple characters as a field terminator, you can refer to the following commands.
    In Windows command line,
    bcp <Databasename.schema.tablename> out "<path>" -c -t -r -T
    For example, you can export data from the Department table with bcp command and use the comma and colon (,:) as one field terminator.
    bcp AdventureWorks.HumanResources.Department out C:\myDepartment.txt -c -t ,: -r \n -T
    The txt file looks as follows:
    However, if you try to bcp with multiple -t options, as in the following command, it will still use only the last terminator defined:
    bcp AdventureWorks.HumanResources.Department in C:\myDepartment.txt -c -t , -r \n -t: -T
    The txt file looks as follows:
    Note also that consecutive terminators imply empty fields. With the comma-separated format below,
    column1,,column2,,,column3
    you might intend only 3 fields (column1, column2 and column3), but after testing there are in fact 6 fields here. That is the significance of a field terminator (a comma in this case).
    Meanwhile, when using BULK INSERT to import the data file into a SQL table, you can only set multiple characters as one terminator in the BULK INSERT statement.
    USE <testdatabase>;
    GO
    BULK INSERT <your table> FROM '<path>'
    WITH (
        DATAFILETYPE = '<char | native | widechar | widenative>',
        FIELDTERMINATOR = '<field_terminator>'
    );
    For example, using BULK INSERT to import the data of C:\myDepartment.txt data file into the DepartmentTest table, the field terminator (,:) must be declared in the statement.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',:'
    );
    The new table then contains the following:
    We cannot declare multiple field terminators (, and :) in the query statement in the following format; a duplicate-option error will occur.
    In SQL Server Management Studio Query Editor:
    BULK INSERT AdventureWorks.HumanResources.DepartmentTest FROM 'C:\myDepartment.txt'
    WITH (
        DATAFILETYPE = 'char',
        FIELDTERMINATOR = ',',
        FIELDTERMINATOR = ':'
    );
    However, if you want to use a data file with fewer or more fields than the table, you can handle it by setting the extra field length to 0 for fewer fields, or by omitting or skipping the extra fields during the bulk copy procedure.
    More Information
    For more information about field terminators, you can review the following articles.
    http://technet.microsoft.com/en-us/library/aa196735(v=sql.80).aspx
    http://social.technet.microsoft.com/Forums/en-US/d2fa4b1e-3bd4-4379-bc30-389202a99ae2/multiple-field-terminators-in-bulk-insert-or-bcp?forum=sqlgetsta
    http://technet.microsoft.com/en-us/library/ms191485.aspx
    http://technet.microsoft.com/en-us/library/aa173858(v=sql.80).aspx
    http://technet.microsoft.com/en-us/library/aa173842(v=sql.80).aspx
    Applies to
    SQL Server 2012
    SQL Server 2008R2
    SQL Server 2005
    SQL Server 2000

    Thanks,
    Is this a supported scenario, or does it use unsupported features?
    For example, can we call exec [ReportServer].dbo.AddEvent @EventType='TimedSubscription', @EventData='b64ce7ec-d598-45cd-bbc2-ea202e0c129d'
    in a supported way?
    Thanks! Josh

  • One application with Multiple schemas- common application frame work

    Hi All,
    I am trying to set up a common application framework in APEX. Please help me with how to achieve this.
    I want to create one application attached to different schemas at run time, so that application maintenance is easy instead of creating copies of the same application.
    More details:
    1. I have one application with 100 pages pointing to a schema dev_common in one workspace, APP_COMMON. I have 50 schemas with the same structure as the dev_common schema but different sets of data (because of the large amount of data). So I want to create one application attached to different schemas.
    So I want to create one application attached to different schemas.
    2. Another thing: I have 100 users, and a user can work on one or multiple schemas (I mean the same application with different schemas attached).
    Any help much appreciated.
    Thanks,

    Thank you for the reply.
    >> b) I think you have to give access rights for the dev_common and app_common to all users.
    The dev_common schema is a kind of placeholder. I have 50 schemas identical to dev_common because of different business requirements, but the front end is the same for all 50 schemas. How can we create one application used for 50 schemas, instead of creating 50 applications and 50 workspaces?
    Please help me.

  • Using multiple timer in the same SessionBean

    Hello,
    Is it possible to use multiple timers in the same Stateless Session Bean? In my application a user can schedule tasks to execute. To do so I was thinking of creating a Session Bean which would create a calendar timer on user request and, when one of the timers expires, retrieve the task to execute thanks to the information stored in the timer.
    When I tried the solution explained above, it seems that the @Timeout method is synchronized on one timer. For example, if I create a timer that executes every 10 seconds and another one that executes every 30 seconds, the timeout callback is called every 30 seconds, but 4 times.
    My code looks like this:
    @Stateless
    @LocalBean
    public class TimeManager {
        @Resource                       // container-injected timer service
        private TimerService timerService;
        private static final Logger logger = Logger.getLogger(TimeManager.class.getName());

        public void onUserRequest(ScheduleExpression expression) {
            TimerConfig timerCfg = new TimerConfig("task info", false); // declaration missing from the original snippet
            Timer timer = timerService.createCalendarTimer(expression, timerCfg);
        }

        @Timeout
        void timeout(Timer timer) {
            logger.log(Level.INFO, (String) timer.getInfo());
        }
    }
    Is there a way to do what I want?
    Thank you

    This doesn't make any sense to me. If I were to write a bunch of schemas for a particular application, would I have them all in the same namespace?
    You would normally have one schema that describes the information model associated with the one namespace that the different documents in your application use.
    If so, why can't I load more than one in the same namespace?
    The parser chooses the schema based on the namespace alone. There is no other information used to decide which schema to use, so you can only have one schema for the namespace.
    They all have different root elements.
    You can have different root elements in the one schema.
    I don't even need the namespaces, but I can't figure out how to get rid of them (the schema isn't valid without them according to XML Spy).
    You can set the schema explicitly before parsing, but not (AFAIK) after parsing has begun, except by using the mapping of namespace to schema location.
    I have also tried to use the external-noNamespaceSchemaLocation property, but it doesn't seem like you can pass in an array of schemas to that one. It only expects a String as the Object you pass in to setProperty.
    Yes, you can only validate a document against the one schema.
    So, how can I load all my schemas so I don't have to reference them in the XML documents?
    Either combine your schemas so you have one schema for your namespace that validates elements which are defined in that namespace (the formal/correct way of doing it), or construct a filter that inserts a PI to point to the schema once the root element is opened (the pragmatic/bit-of-a-hack way of doing it).
    Pete

  • How to read a file with data in Hierarchical Structure using XSD Schema

    Hi
    We have a requirement in which we have to read a FIXED LENGTH file with the FILE ADAPTER. The file contains data in a hierarchical structure. The hierarchy in the file is identified by the first 3 characters of every line, which can be any of: 000, 001, 002, 003 and 004. The remaining fields follow after these. So the structure is like:
    000 -- Header of File. Will come only once in file. Length of this line is 43 characters
    -- 001 -- Sub Header. Child for 000. Can repeat in file. Length of this line is 51 characters
    --- 002 -- Detail record. Child for 001. Can repeat multiple times in a given 001. Length of this line is 43 characters
    -- 003 -- Sub Footer record at same level of 001. Will always come once with 001 record. Child for 000. Length of this line is 48 characters
    004 -- Footer of file.At same level of 000. Will come only once in file. Length of this line is 48 characters
    The requirement is to create an XSD which also validates this hierarchical structure, i.e. the data must come in this hierarchy only, else an error is raised while parsing the file.
    While configuring the FILE ADAPTER to read this file, we are using the Native Schema UI to create the XSD to parse this structure from an example data file. But we are not able to create a valid XSD for this file which also validates the hierarchy of the file.
    Pls provide any pointers or solution for this.
    Link to download the file, file structure details and XSD that we have created:
    https://docs.google.com/file/d/0B9mCtbxc3m-oUmZuSWRlUTBIcUE/edit?usp=sharing
    Thanks
    Amit Rattan

    Hello.. Can anyone help me on this? I need to do a hierarchical read/validation while reading the file using the File Adapter with a native XSD schema.

  • Creating Spatial Index using Cross Schema Operations

    All,
    When logged on as SCOTT, I'd like to create a spatial index for JACK.
    I've tried lots of things and googled a lot, but so far I failed.
    I am using Oracle 11g R1
    In the end, what I do is (assuming this should work):
    Logon as scott/tiger
    then
    CREATE USER JACK IDENTIFIED BY JACK;
    GRANT CREATE TABLE TO JACK;
    GRANT DBA TO JACK;
    CREATE TABLE JACK.JTABLE (GEOM MDSYS.SDO_GEOMETRY);
    ALTER SESSION SET CURRENT_SCHEMA = JACK;
    INSERT INTO USER_SDO_GEOM_METADATA (TABLE_NAME, COLUMN_NAME, SRID, DIMINFO) VALUES('JTABLE', 'GEOM', 27700, SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 1, 671196, 0.5), SDO_DIM_ELEMENT('Y', 6645, 1230275, 0.5)));
    CREATE INDEX JACK.JINDEX ON JACK.JTABLE(GEOM) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
    -- This fails as it can't find any USER_SDO_GEOM_METADATA
    select * from all_sdo_geom_metadata where table_name = 'JTABLE';
    -- Result is 0 records, where is my record?
    select * from mdsys.sdo_geom_metadata_table where sdo_table_name = 'JTABLE';
    -- Returns 1 record, SDO_OWNER = SCOTT
    Now this is a simplified example; all I want is to manage multiple schemas from a single connection.
    The problem really lies with the INSERT trigger of USER_SDO_GEOM_METADATA
    The view uses the CURRENT_SCHEMA for retrieving data:
    CREATE OR REPLACE FORCE VIEW "MDSYS"."USER_SDO_GEOM_METADATA" ("TABLE_NAME", "COLUMN_NAME", "DIMINFO", "SRID")
    AS
      SELECT SDO_TABLE_NAME TABLE_NAME,
        SDO_COLUMN_NAME COLUMN_NAME,
        SDO_DIMINFO DIMINFO,
        SDO_SRID SRID
      FROM SDO_GEOM_METADATA_TABLE
    WHERE sdo_owner = sys_context('userenv', 'CURRENT_SCHEMA');
    which is fine, but the INSERT, UPDATE and DELETE triggers use:
    EXECUTE IMMEDIATE 'SELECT user FROM dual' INTO tname;
    A default installation has only select permission on the table and the all_... view, so I'd not like to change this
    (as we can't change this for every customer installation).
    Any help or advice please.

    Hi
    I must say I have not yet run into a scenario like yours, nor have I read any specific documentation on it.
    As metadata is data describing the (spatial) data, I would see it as belonging to it; hence the schema that holds the (spatial) data also needs to be the schema that describes this data.
    But I would rather see the Oracle guys confirming this, as I might be just mistaken completely.
    I quickly tried something like (note the schema.tablename):
    as user scott:
    insert into user_sdo_geom_metadata values
    ('JACK.JTABLE','GEOM', 27700, SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 1, 671196, 0.5), SDO_DIM_ELEMENT('Y', 6645, 1230275, 0.5)));
    And then follow the usage as described in section 5.1.2. But this seems to raise other issues: the index creation is not able to read the user_sdo_geom_metadata view (ORA-13203), which seems to indicate that the previous insert was not a good one, I would think.
    The alter schema approach does not help for the reason that the user_sdo_geom_metadata has triggers on it for insert, delete and update.
    If you look at the insert trigger: SDO_GEOM_TRIG_INS1
    you will see that it does something like:
    'SELECT user FROM dual' INTO tname;
    to use that user when writing down to the sdo_geom_metadata table.
    So, for what it is worth, I would create the metadata as user JACK and apply 5.1.2 -- something like the sketch below.
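    Run while actually connected as JACK rather than via ALTER SESSION SET CURRENT_SCHEMA (a sketch reusing the values from the original post; untested):
    -- connected as JACK, so the view's trigger records JACK as sdo_owner
    INSERT INTO user_sdo_geom_metadata (table_name, column_name, srid, diminfo)
    VALUES ('JTABLE', 'GEOM', 27700,
            SDO_DIM_ARRAY(SDO_DIM_ELEMENT('X', 1, 671196, 0.5),
                          SDO_DIM_ELEMENT('Y', 6645, 1230275, 0.5)));
    COMMIT;
    CREATE INDEX jindex ON jtable(geom) INDEXTYPE IS MDSYS.SPATIAL_INDEX;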
    Luc

  • Validate a XML file against multiple schema files

    Hello everybody!
    How can I validate an XML file against multiple schema files?
    I have the following XML file:
    <?xml version="1.0" encoding="UTF-8"?>
    <bulkCmConfigDataFile xmlns:es="SpecificAttributes.3.0.xsd"
                          xmlns:xn="genericNrm.xsd"
                          xmlns="configData.xsd">
      <fileHeader fileFormatVersion="32.615 V4.2" vendorName=""/>
      <configData dnPrefix="Undefined">
        <xn:SubNetwork id="3G">
          <xn:SubNetwork id="RNC01">
            <xn:MeContext id="RNC01">
              <xn:VsDataContainer id="RNC01">
                <xn:attributes>
                  <xn:vsDataType>vsDataMeContext</xn:vsDataType>
                  <xn:vsDataFormatVersion>SpecificAttributes.3.0</xn:vsDataFormatVersion>
                  <es:vsDataMeContext>
                    <es:userLabel>RNC01</es:userLabel>
                    <es:ipAddress>172.21.3.17</es:ipAddress>
                    <es:neMIMversion>vF.5.0</es:neMIMversion>
                  </es:vsDataMeContext>
                </xn:attributes>
              </xn:VsDataContainer>
            </xn:MeContext>
          </xn:SubNetwork>
        </xn:SubNetwork>
      </configData>
      <fileFooter dateTime="2006-11-24T11:56:07Z"/>
    </bulkCmConfigDataFile>
    I want to load this file into a table, validate it (against SpecificAttributes.3.0.xsd, genericNrm.xsd and configData.xsd) and query that table. What would the INSERT ... and the SELECT ... for the userLabel attribute look like?
    Many thanks!

    Hi Peter,
    Please use the validateXML BPEL property. This property validates incoming and outgoing XML documents: if set to true, Oracle BPEL Process Manager applies schema validation to incoming and outgoing XML documents. It is applicable to both durable and transient processes. The default value is false.
    Cheers
    A

  • Multiple schema, but instance doc does not have namespace

    Is it possible to successfully validate an instance document without namespaces against multiple schemas with different namespaces?
    example:
    This is the normal XML, which is valid with all the namespaces:
    <MyRoot dateTime="040112061834" instanceIdentifier="NTNDEC13541" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://org.schema.ntn" xsi:schemaLocation="http://org.schema.ntn ntn.xsd" xmlns:cbc="http://org.schema.cbc cbc.xsd">
         <cbc:MessageVersion>040</cbc:MessageVersion>
         <cbc:SenderID>2344134</cbc:SenderID>
         <cbc:RecipientID>3485774</cbc:RecipientID>
         <cbc:TotalNumberOfDeclaration>1</cbc:TotalNumberOfDeclaration>
    </MyRoot>
    but I want to remove all namespaces:
    <MyRoot>
         <MessageVersion>040</MessageVersion>
         <SenderID>2344134</SenderID>
         <RecipientID>3485774</RecipientID>
         <TotalNumberOfDeclaration>1</TotalNumberOfDeclaration>
    </MyRoot>
    Now, is it possible to validate the above XML programmatically using SAX against multiple schemas? Can I do it without any namespace at all?
    thanks

    Hi Larry, thanks for your reply, along with Muppet Mark. If I changed the color settings originally it was not my intention; I was simply opening the next project to work on and voilà, "missing profiles".
    Attachment (picture) 7 was my last setting for color prior to your email, and yes, I was messing around with this last night comparing it to one of our old G4's, so I can't remember where it started out before I got my hands on it.
    Attachment (picture) 8 is my current settings based on your email recommendations.
    Attachment (picture) 6 is the message I receive when I open any files created prior to these changes I just made. However, after I open and resave the file I no longer receive this (picture 6) message.
    My question is: will my color be back to where it was before the problem started last week? In looking over one of the files I opened (I design/build transportation maps, btw), most of the color is where I usually have it set; however, some colors were fractional, as if they were converted from spot colors. So I'm not sure what to do except go through all the known colors and make sure they are where I usually would set them, or do you have a better suggestion?
    Oh, by the way -- for all you guys that are helping, lunch is on me at McDonalds on Michigan St in Grand Rapids, MI!!
    (Attachments #7, #8 and #6 are not included here.)

  • Sliding Window Table Partitioning Problems with RANGE RIGHT, SPLIT, MERGE using Multiple File Groups

    There is misleading information in two system views (sys.data_spaces & sys.destination_data_spaces) about the physical location of data after a partitioning MERGE and before an INDEX REBUILD operation on a partitioned table. In SQL Server 2012 SP1 CU6, the script below (SQLCMD mode; set the DataDrive & LogDrive variables for the runtime environment) will create a test database with file groups and files to support a partitioned table. The partition function and scheme spread the test data across 4 file groups; an empty partition, file group and file are maintained at the start and end of the range. A problem occurs after the SWITCH and MERGE RANGE operations: the views sys.data_spaces & sys.destination_data_spaces show the logical, not the physical, location of data.
    --=================================================================================
    -- PartitionLabSetup_RangeRight.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE RIGHT FOR VALUES
    (
    0,
    15,
    30,
    45,
    60
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    --:SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (15);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber  
    The table below shows the results of the 'Confirm Table Partitioning' query, before and after the MERGE.
    The T-SQL code below illustrates the problem.
    -- PartitionLab_RangeRight
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3 ;
    -- ERROR
    --Msg 5042, Level 16, State 1, Line 1
    --The file 'TestTable_f3 ' cannot be removed because it is not empty.
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f2 ;
    -- Works surprisingly!!
    use workspace;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    --Msg 622, Level 16, State 3, Line 2
    --The filegroup "TestTable_fg2" has no files assigned to it. Tables, indexes, text columns, ntext columns, and image columns cannot be populated on this filegroup until a file is added.
    --The statement has been terminated.
    If you run ALTER INDEX REBUILD before trying to remove files from File Group 3, it works. Rerun the database setup script then the code below.
    -- RANGE RIGHT
    -- Rerun PartitionLabSetup_RangeRight.sql before the code below
    USE workspace;
    DROP TABLE dbo.TestTableOut;
    ALTER INDEX [PK_TestTable] ON [dbo].[TestTable] REBUILD PARTITION = 2;
    USE master;
    ALTER DATABASE workspace
    REMOVE FILE TestTable_f3;
    -- Works as expected!!
    The file in File Group 2 appears to contain data, but it can be dropped. Although the system views report the data in File Group 2, it still physically resides in File Group 3 and isn't moved until the index is rebuilt. The RANGE RIGHT function means the left file group (File Group 2) is retained when splitting ranges.
    RANGE LEFT would have retained the data in File Group 3, where it already resided; no INDEX REBUILD is necessary to effectively complete the MERGE operation. The script below implements the same partitioning strategy (data distribution between partitions) on the test table but uses different boundary definitions and RANGE LEFT.
    --=================================================================================
    -- PartitionLabSetup_RangeLeft.sql
    -- 001. Create test database
    -- 002. Add file groups and files
    -- 003. Create partition function and schema
    -- 004. Create and populate a test table
    --=================================================================================
    USE [master]
    GO
    -- 001 - Create Test Database
    :SETVAR DataDrive "D:\SQL\Data\"
    :SETVAR LogDrive "D:\SQL\Logs\"
    :SETVAR DatabaseName "workspace"
    :SETVAR TableName "TestTable"
    -- Drop if exists and create Database
    IF DATABASEPROPERTYEX(N'$(databasename)','Status') IS NOT NULL
    BEGIN
    ALTER DATABASE $(DatabaseName) SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    DROP DATABASE $(DatabaseName)
    END
    CREATE DATABASE $(DatabaseName)
    ON
    ( NAME = $(DatabaseName)_data,
    FILENAME = N'$(DataDrive)$(DatabaseName)_data.mdf',
    SIZE = 10,
    MAXSIZE = 500,
    FILEGROWTH = 5 )
    LOG ON
    ( NAME = $(DatabaseName)_log,
    FILENAME = N'$(LogDrive)$(DatabaseName).ldf',
    SIZE = 5MB,
    MAXSIZE = 5000MB,
    FILEGROWTH = 5MB ) ;
    GO
    -- 002. Add file groups and files
    --:SETVAR DatabaseName "workspace"
    --:SETVAR TableName "TestTable"
    --:SETVAR DataDrive "D:\SQL\Data\"
    --:SETVAR LogDrive "D:\SQL\Logs\"
    DECLARE @nSQL NVARCHAR(2000) ;
    DECLARE @x INT = 1;
    WHILE @x <= 6
    BEGIN
    SELECT @nSQL =
    'ALTER DATABASE $(DatabaseName)
    ADD FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';
    ALTER DATABASE $(DatabaseName)
    ADD FILE
    NAME= ''$(TableName)_f' + CAST(@x AS CHAR(5)) + ''',
    FILENAME = ''$(DataDrive)\$(TableName)_f' + RTRIM(CAST(@x AS CHAR(5))) + '.ndf''
    TO FILEGROUP $(TableName)_fg' + RTRIM(CAST(@x AS CHAR(5))) + ';'
    EXEC sp_executeSQL @nSQL;
    SET @x = @x + 1;
    END
    -- 003. Create partition function and schema
    --:SETVAR TableName "TestTable"
    --:SETVAR DatabaseName "workspace"
    USE $(DatabaseName);
    CREATE PARTITION FUNCTION $(TableName)_func (int)
    AS RANGE LEFT FOR VALUES
    (
    -1,
    14,
    29,
    44,
    59
    );
    CREATE PARTITION SCHEME $(TableName)_scheme
    AS
    PARTITION $(TableName)_func
    TO
    (
    $(TableName)_fg1,
    $(TableName)_fg2,
    $(TableName)_fg3,
    $(TableName)_fg4,
    $(TableName)_fg5,
    $(TableName)_fg6
    );
    -- Create TestTable
    --:SETVAR TableName "TestTable"
    --:SETVAR BackupDrive "D:\SQL\Backups\"
    --:SETVAR DatabaseName "workspace"
    CREATE TABLE [dbo].$(TableName)(
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_scheme(Partition_PK)
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_GUID_PK] DEFAULT (newid()) FOR [GUID_PK]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateDate] DEFAULT (getdate()) FOR [CreateDate]
    ALTER TABLE [dbo].$(TableName) ADD CONSTRAINT [DF_$(TableName)_CreateServer] DEFAULT (@@servername) FOR [CreateServer]
    -- 004. Create and populate a test table
    -- Load TestTable Data - Seconds 0-59 are used as the Partitioning Key
    --:SETVAR TableName "TestTable"
    SET NOCOUNT ON;
    DECLARE @Now DATETIME = GETDATE()
    WHILE @Now > DATEADD(minute,-1,GETDATE())
    BEGIN
    INSERT INTO [dbo].$(TableName)
    ([Partition_PK]
    ,[RandomNbr])
    VALUES
    (
    DATEPART(second,GETDATE())
    ,ROUND((RAND() * 100),0)
    )
    END
    -- Confirm table partitioning - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    --=================================================================================
    -- SECTION 2 - SWITCH OUT
    -- 001 - Create TestTableOut
    -- 002 - Switch out partition in range 0-14
    -- 003 - Merge range 0 -29
    -- 001. TestTableOut
    :SETVAR TableName "TestTable"
    IF OBJECT_ID('dbo.$(TableName)Out') IS NOT NULL
    DROP TABLE [dbo].[$(TableName)Out]
    CREATE TABLE [dbo].[$(TableName)Out](
    [Partition_PK] [int] NOT NULL,
    [GUID_PK] [uniqueidentifier] NOT NULL,
    [CreateDate] [datetime] NULL,
    [CreateServer] [nvarchar](50) NULL,
    [RandomNbr] [int] NULL,
    CONSTRAINT [PK_$(TableName)Out] PRIMARY KEY CLUSTERED
    (
    [Partition_PK] ASC,
    [GUID_PK] ASC
    )
    ) ON $(TableName)_fg2;
    GO
    -- 002 - Switch out partition in range 0-14
    --:SETVAR TableName "TestTable"
    ALTER TABLE dbo.$(TableName)
    SWITCH PARTITION 2 TO dbo.$(TableName)Out;
    -- 003 - Merge range 0 - 29
    :SETVAR TableName "TestTable"
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (14);
    -- Confirm table partitioning
    -- Original source of this query - http://lextonr.wordpress.com/tag/sys-destination_data_spaces/
    SELECT
    N'DatabaseName' = DB_NAME()
    , N'SchemaName' = s.name
    , N'TableName' = o.name
    , N'IndexName' = i.name
    , N'IndexType' = i.type_desc
    , N'PartitionScheme' = ps.name
    , N'DataSpaceName' = ds.name
    , N'DataSpaceType' = ds.type_desc
    , N'PartitionFunction' = pf.name
    , N'PartitionNumber' = dds.destination_id
    , N'BoundaryValue' = prv.value
    , N'RightBoundary' = pf.boundary_value_on_right
    , N'PartitionFileGroup' = ds2.name
    , N'RowsOfData' = p.[rows]
    FROM
    sys.objects AS o
    INNER JOIN sys.schemas AS s
    ON o.[schema_id] = s.[schema_id]
    INNER JOIN sys.partitions AS p
    ON o.[object_id] = p.[object_id]
    INNER JOIN sys.indexes AS i
    ON p.[object_id] = i.[object_id]
    AND p.index_id = i.index_id
    INNER JOIN sys.data_spaces AS ds
    ON i.data_space_id = ds.data_space_id
    INNER JOIN sys.partition_schemes AS ps
    ON ds.data_space_id = ps.data_space_id
    INNER JOIN sys.partition_functions AS pf
    ON ps.function_id = pf.function_id
    LEFT OUTER JOIN sys.partition_range_values AS prv
    ON pf.function_id = prv.function_id
    AND p.partition_number = prv.boundary_id
    LEFT OUTER JOIN sys.destination_data_spaces AS dds
    ON ps.data_space_id = dds.partition_scheme_id
    AND p.partition_number = dds.destination_id
    LEFT OUTER JOIN sys.data_spaces AS ds2
    ON dds.data_space_id = ds2.data_space_id
    ORDER BY
    DatabaseName
    ,SchemaName
    ,TableName
    ,IndexName
    ,PartitionNumber
    The table below shows the results of the 'Confirm Table Partitioning' query, before and after the MERGE.
    The data in the File and File Group to be dropped (File Group 2) has already been switched out; File Group 3 contains the data so no index rebuild is needed to move data and complete the MERGE.
    RANGE RIGHT would not be a problem in a 'Sliding Window' if the same file group is used for all partitions; when partitions are created and dropped across separate file groups, it introduces a dependency on full index rebuilds. Larger tables are typically partitioned, and a full index rebuild might be an expensive operation. I'm not sure how a RANGE RIGHT partitioning strategy could be implemented, with an ascending partitioning key, using multiple file groups without having to move data. Using a single file group (multiple files) for all partitions within a table would avoid physically moving data between file groups; no index rebuild would be necessary to complete a MERGE, and the system views would accurately reflect the physical location of data.
    If a RANGE RIGHT partition function is used, the data is physically in the wrong file group after the MERGE (assuming a typical ascending partitioning key), and the 'Data Spaces' system views might be misleading. Thanks to Manuj and Chris for a lot of help investigating this.
    NOTE 10/03/2014 - The solution
    The solution is so easy it's embarrassing: I was using the wrong boundary points for the MERGE (both RANGE LEFT & RANGE RIGHT) to get rid of historic data.
    -- Wrong Boundary Point Range Right
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (15);
    -- Wrong Boundary Point Range Left
    --ALTER PARTITION FUNCTION $(TableName)_func()
    --MERGE RANGE (14);
    -- Correct Boundary Points for MERGE
    ALTER PARTITION FUNCTION $(TableName)_func()
    MERGE RANGE (0); -- or -1 for RANGE LEFT
    The empty, switched out partition (on File Group 2) is then MERGED with the empty partition maintained at the start of the range and no data movement is necessary. I retract the suggestion that a problem exists with RANGE RIGHT Sliding Windows using multiple
    file groups and apologize :-)

    Hi Paul Brewer,
    Thanks for your post, and glad to hear that the issue is resolved. It is kind of you to post a reply to share your solution; that way, other community members can benefit from it.
    Regards.
    Sofiya Li
    TechNet Community Support
