SQL Modeler: Logical Model Merge
Hi all,
I'd like to consolidate and merge two (or more) logical models into a single model. We have design teams that work separately, and we need to consolidate their logical models into one. Is this possible? If so, how can I do it? If not, will it be added in a future release? Is there any workaround I could use in the meantime?
Thanks.
Thanks Philip.
All this while I had not noticed the Apply button at the bottom of the screen, and so could not complete the process.
Thanks once again.
Chiedu
Similar Messages
-
SQL*Modeler - creation of VARRAY or Collection Type of scalar type errors
In SQL*Modeler 2.0.0 Build 584, I can create either VARRAYs or Collections. These work fine for user-defined structured types, but I encounter huge problems when I use a simple scalar type such as NUMBER or VARCHAR2.
For instance, I create a new collection type, give it a name, specify it's a collection (or VARRAY, same problem), then click Datatype. In the Select Data Type box I select Logical Type. A new window opens, and I select VARCHAR from the drop-down list.
I enter the size 15, and everything appears fine. I click OK, and the Select Data Type screen now shows a logical type of VARCHAR(15).
So far I'm happy. I generate the DDL, everything is fine, and the DDL contains my collection of VARCHAR2(15).
Now I save the model, close it, and re-open the same model. My collection is now of type VARCHAR with no length, so the next time I generate, the DDL will fail because the syntax is wrong. The same problem happens when selecting a NUMBER: it loses the precision and scale, but at least that statement still works, just with the maximum numeric precision.
OK, so let's try creating distinct types. Why we can't access domains when specifying types from here remains a mystery to me.
So I create a distinct type VARCHAR2_15, which is of logical type VARCHAR, and give it a size. Similarly, I create another distinct type NUMERIC_22_0 with precision 22 and scale 0. This seems to get around the problem of losing the data, but the generated DDL shows the datatypes as VARCHAR (not VARCHAR2) and NUMERIC(22), not NUMBER(22). Now, I know that VARCHAR currently maps to VARCHAR2 but isn't guaranteed to in the future (even though it's been like that since V6), and NUMERIC is just an alias for NUMBER, but it's going to confuse a lot of Java people; it's totally inconsistent and just plain wrong.
Any suggestions or workarounds will be gratefully received.
Ian Bainbridge
Hi Ian,
I see a bug in the save/load of collection types; as a result, the size or the precision and scale information is lost. It's fixed in the new release.
However, I cannot reproduce the problem with distinct types - I have them generated as VARCHAR2 and NUMBER (this is for Oracle).
You can check:
- the database you use in DDL generation - I got VARCHAR and NUMERIC for MS SQL Server;
- the mapping of the logical types VARCHAR and NUMERIC to native types in "Types Administration".
Philip
PS - I was able to reproduce it - I was looking in the wrong place. DDL generation for collection types is broken; it's OK for columns. I logged a bug for that.
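For reference, the kind of DDL that should come out for collections of sized scalar types (a hand-written sketch with illustrative type names, not the tool's actual output) looks like:

```sql
-- VARRAY of a sized scalar type; the length must survive save/load
CREATE OR REPLACE TYPE phone_list AS VARRAY(10) OF VARCHAR2(15);

-- Nested table variant with an explicit precision and scale
CREATE OR REPLACE TYPE amount_list AS TABLE OF NUMBER(22,0);
```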
Edited by: Philip Stoyanov on Jun 28, 2010 8:55 PM -
URGENT - cannot see SQL Modeler contents (XML file)
All,
I have tried to open a model which was developed with an earlier version of SQL Modeler.
Below, I have copied the contents of the top-level XML file for your comments.
It seems to be a 2.x version, and somehow it shows as empty! I searched through the subfolders and there are hundreds of XML files.
I tried 1.5.x and the latest version of SQL Modeler - no luck so far. What am I missing, or is there any workaround, please?
thanks
ali
========================
<?xml version="1.0" encoding="UTF-8" ?>
<model version="2.0">
 <version version="3.2" design_id="C6FFE67F-9E2B-07CE-8B7A-C790E645720B" />
 <object>
  <comment></comment>
  <notes></notes>
  <alter type="created">
   <user>hdai</user>
   <timestamp>2009-05-31 08:30:30</timestamp>
</alter>
<alter type="changed">
<user>hdai</user>
<timestamp>2009-06-22 10:17:09</timestamp>
</alter>
</object>
<engineering_params delete_without_origin="false" engineer_coordinates="true" engineer_generated="true" show_engineering_intree="false" apply_naming_std="false" use_pref_abbreviation="true" upload_directory="" />
<eng_compare show_sel_prop_only="true" not_apply_for_new_objects="true" exclude_from_tree="false">
<entity_table>
<property name="Name" selected="true" />
<property name="Short Name / Abbreviation" selected="true" />
<property name="Comment" selected="true" />
<property name="Comment in RDBMS" selected="true" />
<property name="Notes" selected="true" />
<property name="TempTable Scope" selected="true" />
<property name="Table Type" selected="true" />
<property name="Structured Type" selected="true" />
<property name="Type Substitution (Super-Type Object)" selected="true" />
<property name="Min Volumes" selected="true" />
<property name="Expected Volumes" selected="true" />
<property name="Max Volumes" selected="true" />
<property name="Growth Percent" selected="true" />
<property name="Growth Type" selected="true" />
<property name="Normal Form" selected="true" />
<property name="Adequately Normalized" selected="true" />
</entity_table>
<attribute_column>
<property name="Name" selected="true" />
<property name="Data Type" selected="true" />
<property name="Data Type Kind" selected="true" />
<property name="Mandatory" selected="true" />
<property name="Default Value" selected="true" />
<property name="Use Domain Constraint" selected="true" />
<property name="Comment" selected="true" />
<property name="Comment in RDBMS" selected="true" />
<property name="Notes" selected="true" />
<property name="Source Type" selected="true" />
<property name="Formula Description" selected="true" />
<property name="Type Substitution" selected="true" />
<property name="Scope" selected="true" />
</attribute_column>
<key_index>
<property name="Name" selected="true" />
<property name="Comment" selected="true" />
<property name="Comment in RDBMS" selected="true" />
<property name="Notes" selected="true" />
<property name="Primary Key" selected="true" />
<property name="Attributes/Columns" selected="true" />
</key_index>
<relation_fk>
<property name="Name" selected="true" />
<property name="Comment" selected="true" />
<property name="Comment in RDBMS" selected="true" />
<property name="Notes" selected="true" />
</relation_fk>
<entityview_view>
<property name="Name" selected="true" />
<property name="Comment" selected="true" />
<property name="Comment in RDBMS" selected="true" />
<property name="Notes" selected="true" />
<property name="Structured Type" selected="true" />
<property name="Where" selected="true" />
<property name="Having" selected="true" />
<property name="User Defined SQL" selected="true" />
</entityview_view>
</eng_compare>
<changerequests id="" />
<design type="Data Types" id="2391D8C8-FEC6-9182-ED6C-6EAE53428D29" path_id="0" main_view_id="420C75F3-BD6B-8126-6EBA-050A1FC9BE54" should_be_open="true" is_visible="false">
<name>DataTypes</name>
<comment></comment>
<notes></notes>
</design>
<design type="LogicalDesign" id="4A0EEC77-6A89-10AC-351F-172AA6998230" path_id="0" main_view_id="83AB3B8B-90C4-D6A9-291D-A65F781E85D9" should_be_open="true" is_visible="true">
<name>Logical</name>
<comment></comment>
<notes></notes>
</design>
<design type="RelationalModel" id="1214B08E-6B1C-9449-2769-131F7637FD5C" path_id="1" main_view_id="11A685B9-D950-8186-6160-DDAD08DAC3C0" should_be_open="true" is_visible="false">
<name>SMARTD</name>
<comment></comment>
<notes></notes>
</design>
<design type="RelationalModel" id="A61B461D-F897-0CBF-3924-4BC3D33F3D13" path_id="2" main_view_id="2F6C7BB5-3F13-8657-DD84-BAFEF664D2D8" should_be_open="true" is_visible="false">
<name>SMARTD(2)</name>
<comment></comment>
<notes></notes>
</design>
<design type="Process Model" id="7B21085F-9F75-F895-17AA-32592475602A">
<name>Process Model</name>
<comment></comment>
<notes></notes>
</design>
<design type="Business Information" id="C0C1E8B7-7CDE-F345-C446-412C87D12639" path_id="0" main_view_id="null" should_be_open="true" is_visible="false">
<name>Business Information</name>
<comment></comment>
<notes></notes>
</design>
<domains>
<domain name="defaultdomains" role="uses" />
</domains>
</model>
Philip:
Here is the log file (running Modeler 2.0). Hope it helps.
thx
ali
================LOG file==========
2009-12-02 13:53:34,312 [main] INFO ApplicationView - Oracle SQL Developer Data Modeler Version: 2.0.0 Build: 570
2009-12-02 13:53:36,890 [main] WARN AbstractXMLReader - There is no file with default domains (path: domains name: defaultdomains)
2009-12-02 13:54:58,390 [Thread-3] ERROR FileManager - getDataInputStream: Can not read data
java.io.FileNotFoundException: D:\1D_data\LADPSS\docs\reference\data_model\data_model\relational\1214B08E-6B1C-9449-2769-131F7637FD5C.xml (The system cannot find the path specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStream(Unknown Source)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStreamWithoutExtension(Unknown Source)
at oracle.dbtools.crest.model.persistence.XMLPersistenceManager.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.recreateDesign(Unknown Source)
at oracle.dbtools.crest.model.design.relational.RelationalDesign.load(Unknown Source)
at oracle.dbtools.crest.model.design.Design.openDesign(Unknown Source)
at oracle.dbtools.crest.swingui.ControllerApplication$OpenDesign$2.run(Unknown Source)
2009-12-02 13:54:58,390 [Thread-3] ERROR AbstractXMLReader - Data inputstream is null (path: data_model/relational name: 1214B08E-6B1C-9449-2769-131F7637FD5C)
2009-12-02 13:54:58,390 [Thread-3] ERROR FileManager - getDataInputStream: Can not read data
java.io.FileNotFoundException: D:\1D_data\LADPSS\docs\reference\data_model\data_model\relational\A61B461D-F897-0CBF-3924-4BC3D33F3D13.xml (The system cannot find the path specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStream(Unknown Source)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStreamWithoutExtension(Unknown Source)
at oracle.dbtools.crest.model.persistence.XMLPersistenceManager.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.recreateDesign(Unknown Source)
at oracle.dbtools.crest.model.design.relational.RelationalDesign.load(Unknown Source)
at oracle.dbtools.crest.model.design.Design.openDesign(Unknown Source)
at oracle.dbtools.crest.swingui.ControllerApplication$OpenDesign$2.run(Unknown Source)
2009-12-02 13:54:58,390 [Thread-3] ERROR AbstractXMLReader - Data inputstream is null (path: data_model/relational name: A61B461D-F897-0CBF-3924-4BC3D33F3D13)
2009-12-02 13:54:58,390 [Thread-3] ERROR FileManager - getDataInputStream: Can not read data
java.io.FileNotFoundException: D:\1D_data\LADPSS\docs\reference\data_model\data_model\processmodel\7B21085F-9F75-F895-17AA-32592475602A.xml (The system cannot find the path specified)
at java.io.FileInputStream.open(Native Method)
at java.io.FileInputStream.<init>(FileInputStream.java:106)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStream(Unknown Source)
at oracle.dbtools.crest.model.persistence.FileManager.getDataInputStreamWithoutExtension(Unknown Source)
at oracle.dbtools.crest.model.persistence.XMLPersistenceManager.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.getInputStreamFor(Unknown Source)
at oracle.dbtools.crest.model.persistence.xml.AbstractXMLReader.recreateDesign(Unknown Source)
at oracle.dbtools.crest.model.design.process.ProcessModel.load(Unknown Source)
at oracle.dbtools.crest.model.design.Design.openDesign(Unknown Source)
at oracle.dbtools.crest.swingui.ControllerApplication$OpenDesign$2.run(Unknown Source)
2009-12-02 13:54:58,390 [Thread-3] ERROR AbstractXMLReader - Data inputstream is null (path: data_model/processmodel name: 7B21085F-9F75-F895-17AA-32592475602A) -
SQL Modeler cannot import from data dictionary
It was very frustrating to see SQL Modeler hang while importing from the data dictionary of a database as part of reverse engineering. I have to ask myself whether SQL Modeler is a serious tool and whether I should give up.
I am not sure if Data Modeler is still in beta or in production. The first couple of versions of a new product are normally buggy.
Regards
Alternatively, if this product is still in beta, you can contact the development team or report the issue so that they can take care of it in the next beta release.
Edited by: skvaish1 on Mar 30, 2010 3:18 PM
Edited by: skvaish1 on Mar 30, 2010 3:26 PM -
I have the privilege of performing a very tedious task.
We have some home grown regular expressions in our company. I now need to expand these regular expressions.
Samples:
a = 0-3
b = Null, 0, 1
Expression: Meaning
1:5: 1,2,3,4,5
1a: 10, 11, 12, 13
1b: 1, 10, 11
1[2,3]ab: 120, 1200, 1201, ....
It gets even more interesting because there is a possibility of 1[2,3]a.ab.
I have created two base queries to aid me in my quest. I am using the SQL MODEL clause to solve this problem. I'm pretty confident that I should be able to convert everything into a range and then use one of the MODEL clauses listed below.
My only confusion is how to INCREMENT dynamically. The INCREMENT seems to be a constant in both the FOR and ITERATE statements. I need to figure out a way to increment by .01, .1, etc.
Any help will be greatly appreciated.
CODE:
Reference: http://www.sqlsnippets.com/en/topic-11663.html
Objective: Expand a range with ITERATE
WITH t AS
(SELECT '2:4' pt
FROM DUAL
UNION ALL
SELECT '6:9' pt
FROM DUAL)
SELECT pt AS code_expression
-- , KEY
-- , min_key
-- , max_key
, m_1 AS code
FROM t
MODEL
PARTITION BY (pt)
DIMENSION BY ( 0 AS KEY )
MEASURES (
0 AS m_1,
TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key,
TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
)
RULES
-- UPSERT
ITERATE (100000) UNTIL ( ITERATION_NUMBER = max_key[0] - min_key[0] )
( m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER )
ORDER BY pt, m_1
Explanation:
Line numbers are based on the assumption that "WITH t AS" starts at line 5.
If you need detailed information regarding the MODEL clause, please refer to
the reference site stated above or read some documentation.
Partition-
Line 18: PARTITION BY (pt)
This will make sure that each "KEY" will start at 0 for each value of pt.
Dimension-
Line 19: DIMENSION BY ( 0 AS KEY )
This is necessary for the references max_key[0] and min_key[0] to work.
Measures-
Line 21: 0 AS m_1
A space holder for new values.
Line 22: TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key
The result is '1' for '1:5'.
Line 23: TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
The result is '5' for '1:5'.
Rules-
Line 26: UPSERT
This makes it possible for new rows to be created.
Line 27: ITERATE (100000) UNTIL ( ITERATION_NUMBER = max_key[0] - min_key[0] )
This reads: ITERATE 100000 times or UNTIL ITERATION_NUMBER = max_key[0] - min_key[0],
which would be 4 for '1:5'; but since ITERATION_NUMBER starts at 0, whatever follows
is repeated 5 times.
Line 29: m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER
m_1[ITERATION_NUMBER] means m_1[Value of Dimension KEY].
Thus for each row of KEY the m_1 is min_key[0] + ITERATION_NUMBER.
Reference: http://www.sqlsnippets.com/en/topic-11663.html
Objective: Expand a range using FOR
WITH t AS
(SELECT '2:4' pt
FROM DUAL
UNION ALL
SELECT '6:9' pt
FROM DUAL)
, base AS (
SELECT pt AS code_expression
, KEY AS code
, min_key
, max_key
, my_increment
, m_1
FROM t
MODEL
PARTITION BY (pt)
DIMENSION BY ( CAST(0 AS NUMBER) AS KEY )
MEASURES (
CAST(NULL AS CHAR) AS m_1,
TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key,
TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key,
.1 AS my_increment
)
RULES
-- UPSERT
( m_1[FOR KEY FROM min_key[0] TO max_key[0] INCREMENT 1] = 'Y' )
ORDER BY pt, KEY, m_1
)
SELECT code_expression, code
FROM base
WHERE m_1 = 'Y'
Explanation:
Line numbers are based on the assumption that "WITH t AS" starts at line 5.
If you need detailed information regarding the MODEL clause, please refer to
the reference site stated above or read some documentation.
Partition-
Line 21: PARTITION BY (pt)
This will make sure that each "KEY" will start at 0 for each value of pt.
Dimension-
Line 22: DIMENSION BY ( 0 AS KEY )
This is necessary for the references max_key[0] and min_key[0] to work.
Measures-
Line 24: CAST(NULL AS CHAR) AS m_1
A space holder for results.
Line 25: TO_NUMBER(SUBSTR(pt, 1, INSTR(pt, ':') - 1)) AS min_key
The result is '1' for '1:5'.
Line 26: TO_NUMBER(SUBSTR(pt, INSTR(pt, ':') + 1)) AS max_key
The result is '5' for '1:5'.
Line 27: .1 AS my_increment
The INCREMENT I would like to use.
Rules-
Line 30: UPSERT
This makes it possible for new rows to be created.
However, it seems like it is not necessary.
Line 32: m_1[FOR KEY FROM min_key[0] TO max_key[0] INCREMENT 1] = 'Y'
Where the KEY value is between min_key[0] and max_key[0], set the value of m_1 to 'Y'.
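To answer the original dynamic-increment question within MODEL itself: since INCREMENT only accepts a constant, one workaround (an untested sketch; the three-part 'min:max:step' input format and all names are made up for illustration) is to multiply ITERATION_NUMBER by a step measure inside an ITERATE rule:

```sql
-- Expand 'min:max:step' ranges; the step is applied by multiplication
-- because INCREMENT in FOR/ITERATE must be a constant.
SELECT pt AS code_expression, m_1 AS code
FROM (SELECT '2:4:0.1' pt FROM DUAL
      UNION ALL
      SELECT '1.02:1.08:0.02' pt FROM DUAL)
MODEL
  PARTITION BY (pt)
  DIMENSION BY (0 AS key)
  MEASURES (
    CAST(NULL AS NUMBER) AS m_1,
    TO_NUMBER(REGEXP_SUBSTR(pt, '[^:]+', 1, 1)) AS min_key,
    TO_NUMBER(REGEXP_SUBSTR(pt, '[^:]+', 1, 2)) AS max_key,
    TO_NUMBER(REGEXP_SUBSTR(pt, '[^:]+', 1, 3)) AS step
  )
  RULES UPSERT
  ITERATE (100000) UNTIL (min_key[0] + (ITERATION_NUMBER + 1) * step[0] > max_key[0])
  ( m_1[ITERATION_NUMBER] = min_key[0] + ITERATION_NUMBER * step[0] )
ORDER BY pt, m_1;
```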
Of course, you can accomplish the same thing without MODEL using an integer series generator like this.
create table t ( min_val number, max_val number, increment_size number );
insert into t values ( 2, 3, 0.1 );
insert into t values ( 1.02, 1.08, 0.02 );
commit;
create table integer_table as
select rownum - 1 as n from all_objects where rownum <= 100 ;
select
min_val ,
increment_size ,
min_val + (increment_size * n) as val
from t, integer_table
where
n between 0 and ((max_val - min_val)/increment_size)
order by 3
MIN_VAL INCREMENT_SIZE VAL
1.02 .02 1.02
1.02 .02 1.04
1.02 .02 1.06
1.02 .02 1.08
2 .1 2
2 .1 2.1
2 .1 2.2
2 .1 2.3
2 .1 2.4
2 .1 2.5
2 .1 2.6
2 .1 2.7
2 .1 2.8
2 .1 2.9
2 .1 3
15 rows selected.
--
Joe Fuda
http://www.sqlsnippets.com/ -
SQL*Modeler generation DDL sequence
SQL*Modeler Version 2.0.0. Build 584
In my data model I have a dozen or so External tables and corresponding PUBLIC synonyms to each of them.
The generated script from SQL*Modeler is sequenced by types, tables, views, sequences, synonyms (both public and private), directories, external tables, triggers (and probably some other things that I am not using).
So at the end of running my generation script, I always check DBA_OBJECTS for rows with status = 'INVALID', hoping for none, of course. But because synonyms are created before my external tables, those synonyms are invalid. No big deal - as soon as I access a synonym it will recompile anyway - but it's a bit disconcerting for the DBAs implementing this model to have invalid objects.
Is there a good reason why synonyms are generated before external tables, or is this just an oversight?
I also have a similar problem with types which are used cross-schema, because there is no way to grant the schema a privilege on a type (I use an end-of-script option against the first table to do this, but of course the types were invalid up until the point of granting execute on the parent type, which recompiles the type but again leaves the synonym invalid). This could only be solved by having an option against the type to grant privileges, as for tables. It would be nice to have.
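The post-generation check described above, plus an explicit revalidation of a synonym, can be done with standard statements (the synonym name is illustrative):

```sql
-- List anything left invalid after running the generated script
SELECT owner, object_type, object_name
FROM dba_objects
WHERE status = 'INVALID';

-- A synonym created before its base object can be revalidated explicitly
-- (it would also recompile implicitly on first use)
ALTER PUBLIC SYNONYM my_ext_table COMPILE;
```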
Regards
Ian
Hello,
Is there a way to force Data Modeler (I am using version 4.0.2.840) to put a schema name in front of a table name that a trigger is defined on?
Unfortunately there is not. I've logged a bug on this.
Thanks for reporting the problem.
David -
Sql*modeler loses size against a Structured Type.
SQL*Modeler Version 2.0.0. Build 584
I can create a structured type and save it. Then I go back to that type and use the up and down buttons to move an attribute from one place to another. After moving an attribute, it loses the size or precision for the one that was moved. If I then cancel the changes, the data is still lost.
Similarly, when I click on an attribute I can see the size, say VARCHAR(4), in the left side of the window, but the right side does not display any value in the size field. If I change the name of the attribute, then the size is lost.
Seems like a little bug or two here, methinks...
Regards
Ian
Thanks Ian,
it's fixed.
Regards,
Philip -
SQL*Modeler - Granting Privileges on Directories and Types
SQL*Modeler 2.0.0 Build 584
I see no way to specify grants on a Directory. It would be nice to have the ability to add privileges by user in the same way as for tables/views/sequences etc., or to have a script option similar to the one for tables, which could be executed after the creation of a directory to grant the relevant users read/write privileges.
The same thing applies to types, collections, and external tables.
At the moment I use the first table to run an END OF SCRIPT which contains these things, but these have no visibility or validation.
On external tables I see no way to specify NOLOGGING. If I had this option I would not have to grant my users WRITE access to the directory since log files would not be produced each time an external table is queried.
Well, while I'm at it, I might as well shoot for the moon...
I have a requirement to grant an Oracle-supplied role, but I don't want to import it into the model. It would be nice to have the Oracle standard roles available to the model. Alternatively, what about an after-script for the user, as per tables?
One further requirement is to grant users execute on Oracle-supplied packages (specifically DBMS_LOCK). Again, an after-script for a user would alleviate this and give it visibility against the user.
I live in hope.....
Regards
Ian Bainbridge
On external tables I see no way to specify NOLOGGING. If I had this option I would not have to grant my users WRITE access to the directory since log files would not be produced each time an external table is queried.
This can be done by including NOLOGFILE in the Opaque Format Spec.
For example, if the Access Driver is ORACLE_LOADER, the Opaque Format Spec
could be set to:
RECORDS DELIMITED BY NEWLINE NOLOGFILE
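Put together, an ORACLE_LOADER external table with that spec might look like this (a sketch; the table, directory, and file names are hypothetical):

```sql
CREATE TABLE ext_employees (
  empno NUMBER(6),
  ename VARCHAR2(30)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE NOLOGFILE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('employees.csv')
)
REJECT LIMIT UNLIMITED;
```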
If the Access Driver is ORACLE_DATAPUMP, the Opaque Format Spec could be set
to
NOLOGFILE -
SQL*Modeler - foreign key amendment error
SQL*Modeler 2.0.0 Build 584 (Windows)
Oracle 10.2.0.4 Solaris
If I change an existing foreign key to simply alter the delete rule (from, say, CASCADE to RESTRICT), it changes in memory just fine. If I generate the DDL, the change has taken effect and I get the expected DDL.
However, if I then save the model, exit SQL*Modeler, restart SQL*Modeler, and check the foreign key, the change has been lost.
I can work around this by changing the name of the foreign key, in which case the change is saved correctly. Feature?
Ian
Hi Ian,
I logged a bug for that.
You can use the foreign key dialog to make that change without needing to change the name - just double-click on the FK line or its presentation in the browser.
Philip -
SQL*Modeler forgets check constraint names
SQL*Modeler Version 2.0.0. Build 584
If I create named check constraints under Relational Model/Tables/Table Level Constraints and then generate the model, I have the constraints correctly named. If I exit the model and reload, the constraints appear to be correct, but when I look at the physical model under Tables/Table Check Constraints, I see generated names such as TCC4, TCC5, etc.
At this point I can change the name, but this is not stored; it is just thrown away.
If I go back to Relational Model/Tables/Table Level Constraints and simply Apply with no changes, then the constraint names appear correctly in the generated model. However, I have many tables, and this is not a practical solution each time I generate the model.
Hi Ian,
Thanks for the feedback. A fix will be available in the next release.
Philip -
Issues with new tables created in SQL Modeler
Hi,
whenever I create new tables in SQL Modeler, and then on the Oracle DB using the generated DDL, strange things happen:
When synchronizing the model with the database, some tables, even if they already exist in the DB, appear as missing, and the generated DDL still contains the "CREATE TABLE ..." statement.
When synchronizing the database with the model, the new tables created in the DB appear twice: once with the correct name, and a second time with the original name and the suffix "v1".
There seems to be no way to properly synchronize the model with the DB, apart from removing the tables from the model and re-importing the definitions from the DB.
What am I doing wrong? It seems really strange to me that no one has ever reported a bug like this!
Thank you
Hi,
whenever I create new tables in SQL Modeler, and then on the Oracle DB using the generated DDL
There seems to be no way to properly synchronize the model with the DB, apart from removing the tables from the model and re-import the definitions from the DB
I'm able to reproduce your problem in the case where tables are imported into the model using more than one connection. You need to use "Sync New Objects" in the compare models dialog, and then there will be no need to delete and re-import the tables from the database. Unfortunately, the "Sync New Objects" functionality doesn't work when objects are imported using more than one connection. I logged a bug for that.
Philip -
Calc Previous YTD periods in SQL script logic
Hi, experts!
I need help writing SQL script logic. I want to calculate the following:
(Account = Acc3, TIME = 2009.FEB ) = (Account = Acc1, SUM(TIME = (2009.JAN; 2009.FEB))) - (Account = Acc2, SUM(TIME = (2009.JAN)))
(Account = Acc3, TIME = 2009.MAR ) = (Account = Acc1, SUM(TIME = (2009.JAN; 2009.FEB; 2009.MAR))) - (Account = Acc2, SUM(TIME = (2009.JAN; 2009.FEB)))
..... and so on
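Outside of BPC script logic, the arithmetic being requested - a running total of Acc1 minus the prior period's running total of Acc2 - can be sketched in plain SQL with analytic functions (table and column names are hypothetical; this illustrates the calculation only, not BPC syntax):

```sql
WITH by_period AS (
  SELECT period,
         SUM(CASE WHEN account = 'Acc1' THEN amount ELSE 0 END) AS acc1,
         SUM(CASE WHEN account = 'Acc2' THEN amount ELSE 0 END) AS acc2
  FROM fact_table
  GROUP BY period
)
SELECT period,
       -- YTD of Acc1 through this period
       SUM(acc1) OVER (ORDER BY period)
       -- minus YTD of Acc2 through the previous period
     - NVL(SUM(acc2) OVER (ORDER BY period
             ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING), 0) AS acc3
FROM by_period
ORDER BY period;
```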
Thanks all for the help.
Hi Petar,
Thanks for your advice.
Legalapp in my Apshell does not have any account transformation rules. Looking at the account transformation table, I can see it requires dimension types datasrc and subtable for the datasource and flow fields, but currently my application is similar to the Finance application from Apshell and does not have these dimension types.
Still, I created a rule in the account transformation table leaving these fields blank, and the validation was successful. Then I used SPRUNCALCACCOUNT in the default.LGL file to trigger this rule, but I don't see it working.
Can you help me on below questions:
1. Is it necessary to have datasrc and subtable dimensions to create an account transformation rule?
2. Will SPRUNCALCACCOUNT write the values to the WB table, or should I use COMMIT or some other command along with it to have the values written?
3. Is there any way/place where I can find examples of account transformation rules?
Thanks
Sharath -
SQL-Model-Clause / Example 2 in Data Warehousing Guide 11G/Chapter 24
Hi SQL experts,
I have a RH 5.7 / Oracle 11.2 environment.
The sample schemas are installed.
I executed, as in Example 2 of the Data Warehousing Guide 11g, Chapter 24:
CREATE TABLE currency (
country VARCHAR2(20),
year NUMBER,
month NUMBER,
to_us NUMBER);
INSERT INTO currency
(SELECT DISTINCT
SUBSTR(country_name,1,20), calendar_year, calendar_month_number, 1
FROM countries
CROSS JOIN times t
WHERE calendar_year IN (2000,2001,2002));
UPDATE currency SET to_us=.74 WHERE country='Canada';
and then:
WITH prod_sales_mo AS --Product sales per month for one country
( SELECT country_name c, prod_id p, calendar_year y,
calendar_month_number m, SUM(amount_sold) s
FROM sales s, customers c, times t, countries cn, promotions p, channels ch
WHERE s.promo_id = p.promo_id AND p.promo_total_id = 1 AND
s.channel_id = ch.channel_id AND ch.channel_total_id = 1 AND
s.cust_id=c.cust_id AND
c.country_id=cn.country_id AND country_name='France' AND
s.time_id=t.time_id AND t.calendar_year IN (2000, 2001,2002)
GROUP BY cn.country_name, prod_id, calendar_year, calendar_month_number ),
-- Time data used for ensuring that model has all dates
time_summary AS ( SELECT DISTINCT calendar_year cal_y, calendar_month_number cal_m
FROM times
WHERE calendar_year IN (2000, 2001, 2002))
--START: main query block
SELECT c, p, y, m, s, nr FROM (
SELECT c, p, y, m, s, nr
FROM prod_sales_mo s
--Use partition outer join to make sure that each combination
--of country and product has rows for all month values
PARTITION BY (s.c, s.p)
RIGHT OUTER JOIN time_summary ts ON
(s.m = ts.cal_m
AND s.y = ts.cal_y)
MODEL
REFERENCE curr_conversion ON
(SELECT country, year, month, to_us
FROM currency)
DIMENSION BY (country, year y,month m) MEASURES (to_us)
--START: main model
PARTITION BY (s.c c)
DIMENSION BY (s.p p, ts.cal_y y, ts.cal_m m)
MEASURES (s.s s, CAST(NULL AS NUMBER) nr,
s.c cc ) --country is used for currency conversion
RULES (
--first rule fills in missing data with average values
nr[ANY, ANY, ANY]
= CASE WHEN s[CV(), CV(), CV()] IS NOT NULL
THEN s[CV(), CV(), CV()]
ELSE ROUND(AVG(s)[CV(), CV(), m BETWEEN 1 AND 12],2)
END,
--second rule calculates projected values for 2002
nr[ANY, 2002, ANY] = ROUND(
((nr[CV(),2001,CV()] - nr[CV(),2000, CV()])
/ nr[CV(),2000, CV()]) * nr[CV(),2001, CV()]
+ nr[CV(),2001, CV()],2),
--third rule converts 2002 projections to US dollars
nr[ANY,y != 2002,ANY]
= ROUND(nr[CV(),CV(),CV()]
* curr_conversion.to_us[ cc[CV(),CV(),CV()], CV(y), CV(m)], 2))
ORDER BY c, p, y, m)
WHERE y = '2002'
ORDER BY c, p, y, m;
I got the following error:
ORA-00947: not enough values
00947. 00000 - "not enough values"
*Cause:
*Action:
Error at Line: 39 Column: 83
But when I changed the part
curr_conversion.to_us[ cc[CV(),CV(),CV()], CV(y), CV(m)], 2) of the 3rd rule to
curr_conversion.to_us[ cc[CV(),CV(),CV()] || '', CV(y), CV(m)], 2) or
curr_conversion.to_us[ cc[CV(),CV(),CV()] || null, CV(y), CV(m)], 2)
it worked!
My questions:
1. Can anyone explain why it worked one way and not the other?
2. Rule 3 does not have the same meaning as its comment. Is it an error, or did I misunderstand something?
The comment says "third rule converts 2002 projections to US dollars", but the left side has y != 2002. Thanks for any help!
regards
hqt200475
Edited by: hqt200475 on Dec 20, 2012 4:45 AMHi SQL-Experts
I have a RH 5.7/Oracle 11.2-Environment!
The sample schemas are installed!
I executed as in Example 2 in Data Warehousing Guide 11G/Chapter 24:
CREATE TABLE currency (
country VARCHAR2(20),
year NUMBER,
month NUMBER,
to_us NUMBER);
INSERT INTO currency
(SELECT distinct
SUBSTR(country_name,1,20), calendar_year, calendar_month_number, 1
FROM countries
CROSS JOIN times t
WHERE calendar_year IN (2000,2001,2002)
UPDATE currency set to_us=.74 WHERE country='Canada';and then:
WITH prod_sales_mo AS  --Product sales per month for one country
(
  SELECT country_name c, prod_id p, calendar_year y,
         calendar_month_number m, SUM(amount_sold) s
  FROM sales s, customers c, times t, countries cn, promotions p, channels ch
  WHERE s.promo_id = p.promo_id AND p.promo_total_id = 1 AND
        s.channel_id = ch.channel_id AND ch.channel_total_id = 1 AND
        s.cust_id = c.cust_id AND
        c.country_id = cn.country_id AND country_name = 'France' AND
        s.time_id = t.time_id AND t.calendar_year IN (2000, 2001, 2002)
  GROUP BY cn.country_name, prod_id, calendar_year, calendar_month_number
),
-- Time data used for ensuring that model has all dates
time_summary AS
(
  SELECT DISTINCT calendar_year cal_y, calendar_month_number cal_m
  FROM times
  WHERE calendar_year IN (2000, 2001, 2002)
)
--START: main query block
SELECT c, p, y, m, s, nr FROM (
  SELECT c, p, y, m, s, nr
  FROM prod_sales_mo s
  --Use partition outer join to make sure that each combination
  --of country and product has rows for all month values
  PARTITION BY (s.c, s.p)
  RIGHT OUTER JOIN time_summary ts ON
    (s.m = ts.cal_m
     AND s.y = ts.cal_y)
  MODEL
    REFERENCE curr_conversion ON
      (SELECT country, year, month, to_us
       FROM currency)
      DIMENSION BY (country, year y, month m) MEASURES (to_us)
    --START: main model
    PARTITION BY (s.c c)
    DIMENSION BY (s.p p, ts.cal_y y, ts.cal_m m)
    MEASURES (s.s s, CAST(NULL AS NUMBER) nr,
              s.c cc)  --country is used for currency conversion
    RULES (
      --first rule fills in missing data with average values
      nr[ANY, ANY, ANY]
        = CASE WHEN s[CV(), CV(), CV()] IS NOT NULL
               THEN s[CV(), CV(), CV()]
               ELSE ROUND(AVG(s)[CV(), CV(), m BETWEEN 1 AND 12], 2)
          END,
      --second rule calculates projected values for 2002
      nr[ANY, 2002, ANY] = ROUND(
        ((nr[CV(),2001,CV()] - nr[CV(),2000,CV()])
         / nr[CV(),2000,CV()]) * nr[CV(),2001,CV()]
        + nr[CV(),2001,CV()], 2),
      --third rule converts 2002 projections to US dollars
      nr[ANY, y != 2002, ANY]
        = ROUND(nr[CV(),CV(),CV()]
          * curr_conversion.to_us[cc[CV(),CV(),CV()], CV(y), CV(m)], 2)
    )
  ORDER BY c, p, y, m)
WHERE y = '2002'
ORDER BY c, p, y, m;
I got the following error:
ORA-00947: not enough values
00947. 00000 - "not enough values"
*Cause:
*Action:
Error at Line: 39 Column: 83
But when I changed the part
curr_conversion.to_us[ cc[CV(),CV(),CV()], CV(y), CV(m)], 2)
of the third rule to
curr_conversion.to_us[ cc[CV(),CV(),CV()] || '', CV(y), CV(m)], 2)
or
curr_conversion.to_us[ cc[CV(),CV(),CV()] || null, CV(y), CV(m)], 2)
it worked!
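As a side note, the arithmetic of the second rule is easy to sanity-check outside the database. Here is a minimal Python sketch of the 2002 projection; the figures are made up, not taken from the SH schema, and note that Oracle's ROUND rounds half away from zero while Python's rounds half to even (which does not matter for these inputs):

```python
def project_2002(nr_2000, nr_2001):
    """Emulate rule 2: extrapolate 2002 from the 2000 -> 2001 growth rate.

    nr[ANY, 2002, ANY] = ROUND(((nr2001 - nr2000) / nr2000) * nr2001
                               + nr2001, 2)
    """
    growth = (nr_2001 - nr_2000) / nr_2000
    return round(growth * nr_2001 + nr_2001, 2)

# Hypothetical monthly sales for one product in one country:
print(project_2002(100.0, 120.0))  # 20% growth -> 120 + 24 = 144.0
```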
My questions:
1/ Can anyone explain why the modified versions work and the original doesn't?
2/ Rule 3 does not do what its comment says. Is that an error, or did I misunderstand something?
The comment reads "third rule converts 2002 projections to US dollars", but the left side has y != 2002. Thanks for any help!
regards
hqt200475
Edited by: hqt200475 on Dec 20, 2012 4:45 AM -
SQL model clause not working when dimensioned on a char or a varchar2 column
Hi ,
I tried to execute the query below, but it returns NULL in all the Monday-to-Sunday columns.
select weekno
, empno
, mon
, tue
, wed
, thu
, fri
, sat
, sun
from worked_hours
model
return updated rows
partition by (weekno, empno)
dimension by ( day )
measures ( hours,lpad(' ',3) mon,lpad(' ',3) tue, lpad(' ',3) wed,lpad(' ',3) thu,lpad(' ',3) fri,lpad(' ',3) sat,lpad(' ',3) sun)
RULES upsert
(
  mon [0] = hours [1]
, tue [0] = hours [2]
, wed [0] = hours [3]
, thu [0] = hours [4]
, fri [0] = hours [5]
, sat [0] = hours [6]
, sun [0] = hours [7]
)
In the initial example DAY is a NUMBER, and the query above works. The result set is as below :-
WEEKNO EMPNO MON TUE WED THU FRI SAT SUN
1 1210 8 7.5 8.5 4.5 8
1 1215 2 7.5 8 7.5 8
When the data type of day is changed to char and populated with the right values then the result set looks as below :-
WEEKNO EMPNO MON TUE WED THU FRI SAT SUN
1 1210
1 1215
Can anyone help me resolve this ?
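One possible cause (an assumption on my part; the reply below could not reproduce the problem on 10.2.0.4.0) is a datatype mismatch: if DAY is CHAR(10), its stored values are blank-padded strings, so numeric cell references such as hours[1] no longer address any existing cell. The effect can be mimicked with a plain dictionary lookup in Python:

```python
# Mimic a MODEL dimension as a lookup keyed by the DAY value.
hours_by_day_number = {1: 8, 2: 7.5}     # DAY stored as NUMBER
hours_by_day_char = {"1".ljust(10): 8,   # DAY stored as CHAR(10):
                     "2".ljust(10): 7.5} # values are blank-padded

print(hours_by_day_number.get(1))            # 8    -> hours[1] finds a cell
print(hours_by_day_char.get(1))              # None -> numeric key misses the padded string
print(hours_by_day_char.get("1".ljust(10)))  # 8    -> matches only with padding
```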
--XXXXX
user10723455 wrote:
Hi ,
When the data type of day is changed to char and populated with the right values then the result set looks as below :-
Cannot reproduce on 10.2.0.4.0:
SQL> select * from v$version
2 /
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Prod
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for 32-bit Windows: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
SQL> create table worked_hours_char as select * from worked_hours where 1 = 2
2 /
Table created.
SQL> alter table worked_hours_char modify day char(10)
2 /
Table altered.
SQL> insert into worked_hours_char select * from worked_hours
2 /
14 rows created.
SQL> commit
2 /
Commit complete.
SQL> select weekno
2 , empno
3 , mon
4 , tue
5 , wed
6 , thu
7 , fri
8 , sat
9 , sun
10 from worked_hours
11 model
12 return updated rows
13 partition by (weekno, empno)
14 dimension by ( day )
15 measures ( hours,lpad(' ',3) mon,lpad(' ',3) tue, lpad(' ',3) wed,lpad(' ',3) thu,lpad(' ',3) fri,lpad(' ',3) sat,lpad(' ',3) sun)
16 RULES upsert
17 (
18 mon [0] = hours [1]
19 , tue [0] = hours [2]
20 , wed [0] = hours [3]
21 , thu [0] = hours [4]
22 , fri [0] = hours [5]
23 , sat [0] = hours [6]
24 , sun [0] = hours [7]
25 )
26 /
WEEKNO EMPNO MON TUE WED THU FRI SAT SUN
1 1210 8 7.5 8.5 4.5 8
1 1215 2 7.5 8 7.5 8
SQL> select weekno
2 , empno
3 , mon
4 , tue
5 , wed
6 , thu
7 , fri
8 , sat
9 , sun
10 from worked_hours_char
11 model
12 return updated rows
13 partition by (weekno, empno)
14 dimension by ( day )
15 measures ( hours,lpad(' ',3) mon,lpad(' ',3) tue, lpad(' ',3) wed,lpad(' ',3) thu,lpad(' ',3) fri,lpad(' ',3) sat,lpad(' ',3) sun)
16 RULES upsert
17 (
18 mon [0] = hours [1]
19 , tue [0] = hours [2]
20 , wed [0] = hours [3]
21 , thu [0] = hours [4]
22 , fri [0] = hours [5]
23 , sat [0] = hours [6]
24 , sun [0] = hours [7]
25 )
26 /
WEEKNO EMPNO MON TUE WED THU FRI SAT SUN
1 1210 8 7.5 8.5 4.5 8
1 1215 2 7.5 8 7.5 8
SQL> SY. -
Version
SQL> select *
2 from v$version;
BANNER
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for IBM/AIX RISC System/6000: Version 10.2.0.4.0 - Productio
NLSRTL Version 10.2.0.4.0 - Production
My query:
with tmp AS (
select 1 as num, 'karthik' as txt from dual UNION
select 2 as num, 'john' as txt from dual UNION
select 3 as num, '' as txt from dual UNION
select 4 as num, '' as txt from dual UNION
select 14 as num, 'tom' as txt from dual UNION
select 15 as num, '' as txt from dual UNION
select 26 as num, 'sam' as txt from dual UNION
select 27 as num, '' as txt from dual UNION
select 28 as num, '' as txt from dual
)
select *
from
(
  select num, txt, rw, 'G'||dense_rank() over(order by (num-rw)) grp_id
  from
  (
    select num, txt, row_number() over(order by num) rw
    from tmp
  )
)
model partition by (grp_id)
dimension by (num)
measures (txt, cast(null as varchar2(4000)) as last_row_col)
rules (last_row_col[(num)] = max(txt)[num < cv()])
GRP_ID NUM TXT LAST_ROW_COL
G1 1 karthik
G1 2 john karthik
G1 3 karthik
G1 4 karthik
G3 26 sam
G3 27 sam
G3 28 sam
G2 14 tom
G2 15 tom
Desired Output :
GRP_ID NUM TXT LAST_ROW_COL
G1 1 karthik karthik
G1 2 john
G1 3
G1 4 john
G3 26 sam
G3 27
G3 28 sam
G2 14 tom
G2 15 tom
i.e. within a group (GRP_ID), the column LAST_ROW_COL must hold the most recent (order by num desc) non-null value, to be displayed at the last row of that particular group.
So it should be 'john' for the rest of the null values in group G1 ('karthik' will remain as it is for num = 1), displayed at the ending row of that particular group.
Thanks in advance.
Edited by: RGH on Jan 2, 2012 4:18 AM
RGH wrote:
My query
And why do you want to use MODEL for that? All you need is analytic functions:
with tmp AS (
select 1 as num, 'karthik' as txt from dual UNION ALL
select 2 as num, 'john' as txt from dual UNION ALL
select 3 as num, '' as txt from dual UNION ALL
select 4 as num, '' as txt from dual UNION ALL
select 14 as num, 'tom' as txt from dual UNION ALL
select 15 as num, '' as txt from dual UNION ALL
select 26 as num, 'sam' as txt from dual UNION ALL
select 27 as num, '' as txt from dual UNION ALL
select 28 as num, '' as txt from dual
)
select 'G' || dense_rank() over(order by num - rw) grp_id,
num,
txt,
last_row_col
from (
select num,
txt,
case
when lead(txt) over(order by num) is not null then last_value(txt ignore nulls) over(order by num)
when num = max(num) over() then last_value(txt ignore nulls) over(order by num)
end last_row_col,
row_number() over(order by num) rw
from tmp
)
GRP_ID NUM TXT LAST_RO
G1 1 karthik karthik
G1 2 john
G1 3
G1 4 john
G2 14 tom
G2 15 tom
G3 26 sam
G3 27
G3 28 sam
9 rows selected.
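The logic of that analytic answer (carry the running last non-null TXT, but surface it only where the next row's TXT is non-null or on the very last row) can also be sanity-checked procedurally. Here is a rough Python sketch of the same rule, not of the SQL itself:

```python
def last_row_col(rows):
    """rows: list of (num, txt) ordered by num; txt may be None.

    Mirrors the analytic answer above: emit the running last non-null
    txt when the following row's txt is non-null, or on the final row;
    otherwise emit None.
    """
    out, last_seen = [], None
    for i, (_num, txt) in enumerate(rows):
        if txt is not None:
            last_seen = txt
        nxt = rows[i + 1][1] if i + 1 < len(rows) else None
        out.append(last_seen if nxt is not None or i == len(rows) - 1 else None)
    return out

rows = [(1, 'karthik'), (2, 'john'), (3, None), (4, None),
        (14, 'tom'), (15, None), (26, 'sam'), (27, None), (28, None)]
print(last_row_col(rows))
# ['karthik', None, None, 'john', None, 'tom', None, None, 'sam']
```

This reproduces the nine LAST_ROW_COL values shown in the result set above.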
SQL> SY.