Multi-table inheritance and performance

I really like the idea of multi-table inheritance, since I have a main
class and three subclasses which each just add one integer to the main class.
It would be a waste to spend four tables on this, so I decided to put them
all into one.
My problem now is that when I query for a specific class, Kodo will build
SQL like:
select ... from table where
JDOCLASSX='de.mycompany.myprojectname.mysubpack.classname'
This is pretty slow once the table grows, because string comparisons are
awful - and even worse: the database has to compare nearly the whole
string, because it differs only in the last letters.
Indexing would help a bit, but wouldn't outperform integer comparisons.
Is it possible to get Kodo to do one more step of normalization?
Having an extra table containing all class names and IDs for them (and
references in the original table) would improve the performance of
multi-table inheritance quite a lot!
Even with standard classes it would save a lot of memory not to have the
full class name in each row.

Stefan-
Thanks for the feedback. Note that 3.0 does make this simpler: we have
extensions that allow you to define the mechanism for subclass
identification purely in the metadata file(s). See:
http://solarmetric.com/Software/Documentation/3.0.0RC1/docs/manual.html#ref_guide_mapping_classind
The idea for having a separate table mapping numbers to class names is
good, but we prefer to have as few Kodo-managed tables as possible. It
is just as easy to do this in the metadata file.
In article <[email protected]>, Stefan wrote:
First of all: thanks for the fast help - this one (IntegerProvider) helped
and solved my problem.
Kodo is really amazing with all its places where customization can be
done!
Anyway, as a wish for future releases: exactly this technique - using
integers as class identifiers rather than the full class names - is what I
meant by "normalization".
The only thing missing is a table containing information about how classIDs
are mapped to class names (which is now contained as an explicit statement
in the .jdo file). This table is not mapped to the primary key of the main
table (as you suggested), but to the classID integer, which acts as a
foreign key.
A query for a specific class would then be solved with a query like:
select * from classValues, classMapping where
classValues.JDOCLASSX=classMapping.IDX and
classMapping.CLASSNAMEX='de.company.whatever'
This table should be managed by Kodo, of course!
Imagine a table with 300,000 rows containing only 3 different derived
classes.
You would have an extra table with 4 rows (base class + 3 derived types).
Searching for the classID is done in that 4-row table, while searching the
actual class instances would then be done over an indexed integer classID
field.
This is much faster than having the database do 300,000 string
comparisons (even when indexed).
(By the way, it would save a lot of memory as well, even on classes which
are not derived.)
If this technique were done by Kodo transparently, maybe turned on with an
extra option... that would be great, since you wouldn't need to take care
of different "subclass-indicator-values", could go on as always, and would
have far better performance...
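In SQL terms, the layout I have in mind would look roughly like this (a
sketch only - table and column names are just for illustration):

CREATE TABLE classmapping (
  idx        INTEGER PRIMARY KEY,
  classnamex VARCHAR(255) NOT NULL UNIQUE
);

CREATE TABLE classvalues (
  jdoidx    INTEGER PRIMARY KEY,
  jdoclassx INTEGER NOT NULL REFERENCES classmapping (idx)
  -- ... the persistent fields of the base class and subclasses ...
);

-- the class indicator is a small indexed integer instead of a long string
CREATE INDEX classvalues_class_ix ON classvalues (jdoclassx);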
Stephen Kim wrote:
You could push off fields to separate tables (as long as the pk column
is the same); however, I doubt that would add much performance benefit
in this case, since we'd simply add a join (e.g. select data.name,
info.jdoclassx, info.jdoidx from data, info where data.jdoidx = info.jdoidx
and info.jdoclassx = 'foo'). One could turn off the default fetch group for
fields stored in data, but now you're adding a second select to load one
"row" of data.
However, we DO provide an integer subclass provider which can speed up
these sorts of queries a lot if you need to constrain your queries by
class, especially with indexing, at the expense of simple legibility:
http://solarmetric.com/Software/Documentation/2.5.3/docs/ref_guide_meta_class.html#meta-class-subclass-provider
Steve Kim
[email protected]
SolarMetric Inc.
http://www.solarmetric.com
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com

Similar Messages

  • Issue with trigger, multi-table insert and error logging

    I find that if I try to perform a multi-table insert with error logging on a table that has a trigger, then some constraint violations result in an exception being raised as well as logged:
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE     11.2.0.1.0     Production
    TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production
    SQL> create table t1 (id integer primary key);
    Table created.
    SQL> create table t2 (id integer primary key, t1_id integer,
    2           constraint t2_t1_fk foreign key (t1_id) references t1);
    Table created.
    SQL> exec dbms_errlog.create_error_log ('T2');
    PL/SQL procedure successfully completed.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    0 rows created.
    SQL> create or replace trigger t2_trg
    2 before insert or update on t2
    3 for each row
    4 begin
    5 null;
    6 end;
    7 /
    Trigger created.
    SQL> insert all
    2 into t2 (id, t1_id)
    3 values (x, y)
    4 log errors into err$_t2 reject limit unlimited
    5 select 1 x, 2 y from dual;
    insert all
    ERROR at line 1:
    ORA-02291: integrity constraint (EOR.T2_T1_FK) violated - parent key not found
    This doesn't appear to be a documented restriction. Does anyone know if it is a bug?

    Tony Andrews wrote:
This doesn't appear to be a documented restriction. Does anyone know if it is a bug?
Check The Execution Model for Triggers and Integrity Constraint Checking.
    SY.
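If the behaviour turns out to be by design, one possible workaround (a sketch only, using the tables above; untested) is to pre-filter rows whose parent key is missing, so the FK violation never fires at all. Note that rows removed this way are not logged to err$_t2 and would need separate handling:

-- hypothetical pre-filter: only insert rows whose parent row exists in t1
INSERT ALL
  INTO t2 (id, t1_id) VALUES (x, y)
  LOG ERRORS INTO err$_t2 REJECT LIMIT UNLIMITED
SELECT x, y
FROM (SELECT 1 AS x, 2 AS y FROM dual) src
WHERE EXISTS (SELECT 1 FROM t1 WHERE t1.id = src.y);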

Multi-table insert and ... tricky problem

I have the following account-related tables:
    --staging table where all raw records exist
    acc_event_stage
    (acc_id NUMBER(22) NOT NULL,
    event_code varchar2(50) NOT NULL,
    event_date date not null);
    --production table where valid record are moved from staging table
    acc_event
    (acc_id NUMBER(22) NOT NULL,
    event_code varchar2(50) NOT NULL,
    event_date date not null);
    --error records from staging are moved in here
    err_file
    (acc_id NUMBER(22) NOT NULL,
    error_code varchar2(50) NOT NULL);
    --summary records from production account table
    acc_event_summary
    (acc_id NUMBER(22) NOT NULL,
    event_date date NOT NULL,
    instance_flag char(1) not null);
Records in the staging table may look like this (I have put it in simple English for ease):
    open account A on June 12 8 am
    close account A on June 12 9 am
    open account A on June 12 11 am
Rules:
An account cannot be closed if an open account doesn't exist.
An account cannot be updated if an open account doesn't exist.
An account cannot be opened if an open account already exists.
Since I am using:
insert all
when open account record and open account already exists
into ... error table
when close account record and open account doesn't exist
into ... error table
else
into acc_event table
select * from acc_event_stage order by ..
I am wondering, if the staging table has records like this:
open account A on June 12 8 am
close account A on June 12 9 am
open account A on June 12 11 am
then how can I validate some of the records, given the rules above? I can do the validation against existing records (from before the staging table data arrived) without any problem. But the tricky part is the new records in the staging table.
This can be easily achieved if I use a cursor loop method, but doing it with a multi-table insert looks like a problem.
Any opinion?
    thx
    m

In short, as a simple example: through a multi-table insert I insert records
into the production account event table from the staging account event
table, making sure that each record doesn't already exist in the production
table. This will bomb if two open-account records exist in the current
staging table.
It will also bomb if an open-account and then a close-account record exist,
etc.
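One possible direction (a sketch only - untested, and it ignores the pre-existing acc_event history, which could be folded in by UNION ALL-ing the production rows into the window input and filtering them out of the insert) is to precompute each staging row's prior state with analytic functions and route rows in the WHEN clauses:

-- count, per account, the OPEN/CLOSE events that precede each staging row;
-- caveat: rows routed to err_file still count in the running totals,
-- which a cursor loop would handle differently
INSERT ALL
  WHEN event_code = 'OPEN'  AND opens_before > closes_before THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'ALREADY OPEN')
  WHEN event_code = 'CLOSE' AND opens_before <= closes_before THEN
    INTO err_file (acc_id, error_code) VALUES (acc_id, 'NOT OPEN')
  ELSE
    INTO acc_event (acc_id, event_code, event_date)
    VALUES (acc_id, event_code, event_date)
SELECT acc_id, event_code, event_date,
       COUNT(CASE WHEN event_code = 'OPEN' THEN 1 END)
         OVER (PARTITION BY acc_id ORDER BY event_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS opens_before,
       COUNT(CASE WHEN event_code = 'CLOSE' THEN 1 END)
         OVER (PARTITION BY acc_id ORDER BY event_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS closes_before
FROM acc_event_stage;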

  • Strange issue with Multi-Table LTS and report filters (10g)

    Hi,
    I am having some troubles with the following scenario:
    I have a large flattened table that is being used as both Fact and Dimension in the BMM layer. I also have another table that contains some supplementary measures needed by some reports.
    Logical Fact table has an LTS that consists of the 2 tables which are joined on 2 columns (eg, a.col1 = b.col1 and a.col2 = b.col2, etc). The columns used in the join also exist in the logical dimension which uses the same flattened table. I use these columns in my report filters.
    I have created some Fact measures, some from each source inside the LTS.
    What's happening is that when I am using this model in Answers, the query being generated is applying my filtering conditions to both tables, which seems unnecessary to me since they are joined on these columns.
    i.e.,
    select *
    from table1 a, table2 b
    where a.col1 = b.col1
    and a.col2 = b.col2
    and a.col1 = 'Value 1'
    and a.col2 = 'Value 2'
    and b.col1 = 'Value 1'
and b.col2 = 'Value 2'
    The last two lines in the query above should not be happening. I have not mapped these columns anywhere in the BMM to table2 (other than physical layer join), only to table1.
    For clarification, it's not hurting the end results from what I've seen, but it is adding additional filtering to the query that can increase overall cost, which I'm trying to avoid.
    Any ideas/suggestions?
    Thanks

    Hi,
I think this is a known issue. The BI Server pulls both tables involved in the LTS into any query (yes, even if the query uses only one column from one of the tables in the LTS), and it applies the filters to both as well.
In this case, what I would suggest is to create 2 LTSs (probably 3): one for each physical table being joined in the LTS, and one for both of them joined (inner/left/right, etc.).
With this kind of setup, when a report uses columns from only one particular table, the BI Server can choose the corresponding single-table LTS. However, if a report uses columns from both tables, the BI Server will choose the LTS with the join.
    Hope this helps.
    Thank you,
    Dhar

DataStoreException for multi-table inheritance using numeric indicator value

    I am using multi-table inheritance with Kodo 2.5.5
    Relevant kodo.properties are:
com.solarmetric.kodo.PersistentTypes=com.letsys.erespond.business.model.network.PowerTransformer,com.letsys.erespond.business.model.network.Device,com.letsys.erespond.business.model.network.ElectricalDevice,com.letsys.erespond.business.model.network.Switch
javax.jdo.PersistenceManagerFactoryClass=com.solarmetric.kodo.impl.jdbc.JDBCPersistenceManagerFactory
com.solarmetric.kodo.impl.jdbc.DefaultSubclassProviderClass=com.solarmetric.kodo.impl.jdbc.ormapping.IntegerSubclassProvider
I have given each of my PC classes a numeric subclass indicator value (to
improve join performance), e.g.:
    <extension vendor-name="kodo" key="subclass-indicator-value" value="1">
    </extension>
Here is my query code; the exception is at the end of the email:
Extent e = pm.getExtent(Device.class, true);
Query q = pm.newQuery(e, "active == p1");
q.declareParameters("int p1");
Collection c = (Collection) q.execute(new Integer(1));
Iterator iter = c.iterator();
while (iter.hasNext()) {
    Device d = (Device) iter.next();
    System.out.println("name=" + d.getName());
}
NOTE: The above code works if I first do a similar query, but specify the
subclass in the pm.getExtent() call.
    It seems to me that the persistent classes listed in kodo.properties are not
    getting loaded.
    I also get this from debug on startup
    421 DEBUG [TestRunner-Thread] kodo.Runtime -
    [email protected]067:
    registering 1 classes: [class
    com.letsys.erespond.business.model.network.Device]
    421 DEBUG [TestRunner-Thread] kodo.MetaData - found JDO resource
    Device.jdo for com.letsys.erespond.business.model.network.Device at
    file:/C:/Dev/projectm/classes/com/letsys/erespond/business/model/network/Device.jdo
    421 INFO [TestRunner-Thread] kodo.MetaData -
    com.solarmetric.kodo.meta.JDOMetaDataParser@d12eea: parsing source:
    file:/C:/Dev/projectm/classes/com/letsys/erespond/business/model/network/Device.jdo
    I would have expected it to register all my PC classes at this stage.
    Any ideas ???
    Regards,
    Chris.
[junit] Testcase: testInheritenceQuery took 9.747 sec
[junit] Caused an ERROR
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "null", the type registered for subclass indicator value "3", is referenced in the database, but does not exist.
[junit] NestedThrowables:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "3" is referenced in the database, but does not exist.
[junit] com.solarmetric.kodo.runtime.FatalDataStoreException: com.solarmetric.kodo.runtime.DataStoreException: Type "null", the type registered for subclass indicator value "3", is referenced in the database, but does not exist.
[junit] NestedThrowables:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "3" is referenced in the database, but does not exist.
[junit] NestedThrowables:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "null", the type registered for subclass indicator value "3", is referenced in the database, but does not exist.
[junit] NestedThrowables:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "3" is referenced in the database, but does not exist.
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.instantiateRow(LazyResultList.java:217)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.get(LazyResultList.java:142)
[junit] at java.util.AbstractList$Itr.next(AbstractList.java:416)
[junit] at com.solarmetric.kodo.runtime.objectprovider.ResultListIterator.next(ResultListIterator.java:49)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.ResultListFactory.createResultList(ResultListFactory.java:82)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:1138)
[junit] at com.solarmetric.kodo.impl.jdbc.query.JDBCQuery.executeQuery(JDBCQuery.java:126)
[junit] at com.solarmetric.kodo.query.QueryImpl$DatastoreQueryExecutor.executeQuery(QueryImpl.java:1565)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeQueryWithMap(QueryImpl.java:685)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:545)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithArray(QueryImpl.java:531)
[junit] at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:501)
[junit] at com.letsys.erespond.business.model.network.DeviceAppTest.testInheritenceQuery(DeviceAppTest.java:32)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] NestedThrowablesStackTrace:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "null", the type registered for subclass indicator value "3", is referenced in the database, but does not exist.
[junit] NestedThrowables:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "3" is referenced in the database, but does not exist.
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.IntegerSubclassProvider.getType(IntegerSubclassProvider.java:214)
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.SubclassProviderImpl.getType(SubclassProviderImpl.java:97)
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.loadPrimaryMappings(ClassMapping.java:1060)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.initialize(JDBCStoreManager.java:374)
[junit] at com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState(StateManagerImpl.java:215)
[junit] at com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter(PersistenceManagerImpl.java:1278)
[junit] at com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1196)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.createFromResultSet(JDBCStoreManager.java:967)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager$2.getResultObject(JDBCStoreManager.java:1146)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.instantiateRow(LazyResultList.java:199)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.get(LazyResultList.java:142)
[junit] at java.util.AbstractList$Itr.next(AbstractList.java:416)
[junit] at com.solarmetric.kodo.runtime.objectprovider.ResultListIterator.next(ResultListIterator.java:49)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.ResultListFactory.createResultList(ResultListFactory.java:82)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:1138)
[junit] at com.solarmetric.kodo.impl.jdbc.query.JDBCQuery.executeQuery(JDBCQuery.java:126)
[junit] at com.solarmetric.kodo.query.QueryImpl$DatastoreQueryExecutor.executeQuery(QueryImpl.java:1565)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeQueryWithMap(QueryImpl.java:685)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:545)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithArray(QueryImpl.java:531)
[junit] at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:501)
[junit] at com.letsys.erespond.business.model.network.DeviceAppTest.testInheritenceQuery(DeviceAppTest.java:32)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] NestedThrowablesStackTrace:
[junit] com.solarmetric.kodo.runtime.DataStoreException: Type "3" is referenced in the database, but does not exist.
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.IntegerSubclassProvider.getType(IntegerSubclassProvider.java:207)
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.SubclassProviderImpl.getType(SubclassProviderImpl.java:97)
[junit] at com.solarmetric.kodo.impl.jdbc.ormapping.ClassMapping.loadPrimaryMappings(ClassMapping.java:1060)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.initialize(JDBCStoreManager.java:374)
[junit] at com.solarmetric.kodo.runtime.StateManagerImpl.loadInitialState(StateManagerImpl.java:215)
[junit] at com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectByIdFilter(PersistenceManagerImpl.java:1278)
[junit] at com.solarmetric.kodo.runtime.PersistenceManagerImpl.getObjectById(PersistenceManagerImpl.java:1196)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.createFromResultSet(JDBCStoreManager.java:967)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager$2.getResultObject(JDBCStoreManager.java:1146)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.instantiateRow(LazyResultList.java:199)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.LazyResultList.get(LazyResultList.java:142)
[junit] at java.util.AbstractList$Itr.next(AbstractList.java:416)
[junit] at com.solarmetric.kodo.runtime.objectprovider.ResultListIterator.next(ResultListIterator.java:49)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.ResultListFactory.createResultList(ResultListFactory.java:82)
[junit] at com.solarmetric.kodo.impl.jdbc.runtime.JDBCStoreManager.executeQuery(JDBCStoreManager.java:1138)
[junit] at com.solarmetric.kodo.impl.jdbc.query.JDBCQuery.executeQuery(JDBCQuery.java:126)
[junit] at com.solarmetric.kodo.query.QueryImpl$DatastoreQueryExecutor.executeQuery(QueryImpl.java:1565)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeQueryWithMap(QueryImpl.java:685)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithMap(QueryImpl.java:545)
[junit] at com.solarmetric.kodo.query.QueryImpl.executeWithArray(QueryImpl.java:531)
[junit] at com.solarmetric.kodo.query.QueryImpl.execute(QueryImpl.java:501)
[junit] at com.letsys.erespond.business.model.network.DeviceAppTest.testInheritenceQuery(DeviceAppTest.java:32)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
[junit] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
[junit] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
[junit] Testcase: testInheritenceQuery
[junit] TEST com.letsys.erespond.business.model.network.DeviceAppTest FAILED

    Marc,
    All of my PC classes are in the result of the api call you suggested:
[junit] Kodo type[0] = com.letsys.erespond.business.model.network.PowerTransformer
[junit] Kodo type[1] = com.letsys.erespond.business.model.network.Device
[junit] Kodo type[2] = com.letsys.erespond.business.model.network.ElectricalDevice
[junit] Kodo type[3] = com.letsys.erespond.business.model.network.Switch
However, when doing the query, it only seems to read the parent .jdo file:
[junit] (kodo.MetaData 89) com.solarmetric.kodo.meta.JDOMetaDataParser@16477d9: parsing source: file:C:/Dev/projectm/classes/com/letsys/erespond/business/model/network/Device.jdo
    When I do a Class.forName() on each of the classes, I see that it loads in
    all the .jdo files.
    Chris.
    "Marc Prud'hommeaux" <[email protected]> wrote in message
    news:[email protected]..
    Chris-
    Strange. It does seem like your PersistentTypes property is not being
    used at all.
    Can you try casting your pmf to a JDBCPersistenceManagerFactory and
    seeing if your types have actually been registered with:
    String[] types = pmf.getConfiguration ().getPersistentTypeNames ();
    In article <bop53p$a5u$[email protected]>, Chris McCarthy wrote:
    Hi Patrick,
    Just tried it and yes, doing a Class.forName() on the classes I insert
    does provide a work-around.
    Any idea what is causing this ?
    Regards,
    Chris.
    "Patrick Linskey" <[email protected]> wrote in message
    news:boc31n$7do$[email protected]..
    Chris,
    What happens if you do a Class.forName() (or just a class reference) to
    each of your persistent classes before executing any JDO code?
    -Patrick
    Marc Prud'hommeaux [email protected]
    SolarMetric Inc. http://www.solarmetric.com

Any general tips on getting better performance out of a multi-table insert?

I have been struggling with coding a multi-table insert, which is the first time I have ever used one, and my Oracle skills are pretty poor in general, so now that the query is built and works fine I am sad to see it's quite slow.
I have checked numerous articles on optimizing, but the things I try don't seem to get me much better performance.
First let me describe my scenario to see if you agree that my performance is slow...
It's an insert all command which ends up inserting into 5 separate tables, conditionally (at least 4 inserts, sometimes 5, but the fifth is the smallest table). Some stats on these tables follow:
    Source table: 5.3M rows, ~150 columns wide. Parallel degree 4. everything else default.
    Target table 1: 0 rows, 27 columns wide. Parallel 4. everything else default.
    Target table 2: 0 rows, 63 columns wide. Parallel 4. default.
    Target table 3: 0 rows, 33 columns wide. Parallel 4. default.
    Target table 4: 0 rows, 9 columns wide. Parallel 4. default.
    Target table 5: 0 rows, 13 columns wide. Parallel 4. default.
The parallelism is just about the only customization I myself have done. Why 4? I don't know; it's pretty arbitrary, to be honest.
    Indexes?
    Table 1 has 3 index + PK.
    Table 2 has 0 index + FK + PK.
    Table 3 has 4 index + FK + PK
    Table 4 has 3 index + FK + PK
    Table 5 has 4 index + FK + PK
    None of the indexes are anything crazy, maybe 3 or 4 of all of them are on multiple columns, 2-3 max. The rest are on single columns.
    The query itself looks something like this:
    insert /*+ append */ all
    when 1=1 then
    into table1 (...) values (...)
    into table2 (...) values (...)
    when a=b then
    into table3 (...) values (...)
    when a=c then
    into table3 (...) values (...)
    when p=q then
    into table4(...) values (...)
    when x=y then
    into table5(...) values (...)
    select .... from source_table
Hints I tried are with append, without append, and parallel (though adding parallel seemed to make the query run in serial, according to my session browser).
    Now for the performance:
It does about 8,000 rows per minute on table1. That means it should also have that much in table2, table3 and table4, and then a subset of that in table5.
Does that seem normal, or am I expecting too much?
I find articles talking about millions of rows per minute... Obviously I don't think I can achieve that much... but maybe 30k or so on each table is a reasonable goal?
If it seems my performance is slow, what else do you think I should try? Is there any information I could gather to see if maybe it's a poorly configured database for this?
P.S. Is it possible I can run this so that it commits every x rows or something? I had the heartbreaking event of a network issue giving me a sudden "ora-25402: transaction must roll back" after it had been running for 3.5 hours. So I lost all the progress it made and have to start over. Plus I wonder if the sheer amount of data being queued for commit/rollback is causing some of the problem?
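For the commit-every-x-rows idea, one common shape (a sketch only, with hypothetical key and column names; it trades the all-or-nothing atomicity of the single statement for restartability) is to split the source by key ranges and commit per chunk:

DECLARE
  -- split the source into 100 roughly equal key ranges
  CURSOR chunks IS
    SELECT MIN(src_id) AS lo, MAX(src_id) AS hi
    FROM (SELECT src_id, NTILE(100) OVER (ORDER BY src_id) AS bucket
          FROM source_table)
    GROUP BY bucket;
BEGIN
  FOR c IN chunks LOOP
    INSERT ALL
      WHEN 1 = 1 THEN
        INTO table1 (id, col_a) VALUES (src_id, col_a)
      WHEN a = b THEN
        INTO table3 (id, col_b) VALUES (src_id, col_b)
      SELECT src_id, col_a, col_b, a, b
      FROM source_table
      WHERE src_id BETWEEN c.lo AND c.hi;
    COMMIT;  -- a mid-run failure now only loses the chunk in flight
  END LOOP;
END;
/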

Looks like there are about 54 sessions on my database; 7 of the sessions belong to me (2 taken by TOAD, 4 by my parallel slave sessions, and 1 by the master of those 4).
In v$session_event there are 546 rows; if I filter it to the SIDs of my current session and order by wait time desc (columns: SID, EVENT, TOTAL_WAITS, TOTAL_TIMEOUTS, TIME_WAITED, AVERAGE_WAIT, MAX_WAIT, TIME_WAITED_MICRO, EVENT_ID, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS):
510     events in waitclass Other     30670     9161     329759     10.75     196     3297590639     1736664284     1893977003     0     Other
    512     events in waitclass Other     32428     10920     329728     10.17     196     3297276553     1736664284     1893977003     0     Other
    243     events in waitclass Other     21513     5     329594     15.32     196     3295935977     1736664284     1893977003     0     Other
    223     events in waitclass Other     21570     52     329590     15.28     196     3295898897     1736664284     1893977003     0     Other
    241     row cache lock     1273669     0     42137     0.03     267     421374408     1714089451     3875070507     4     Concurrency
    241     events in waitclass Other     614793     0     34266     0.06     12     342660764     1736664284     1893977003     0     Other
    241     db file sequential read     13323     0     3948     0.3     13     39475015     2652584166     1740759767     8     User I/O
    241     SQL*Net message from client     7     0     1608     229.65     1566     16075283     1421975091     2723168908     6     Idle
    241     log file switch completion     83     0     459     5.54     73     4594763     3834950329     3290255840     2     Configuration
    241     gc current grant 2-way     5023     0     159     0.03     0     1591377     2685450749     3871361733     11     Cluster
    241     os thread startup     4     0     55     13.82     26     552895     86156091     3875070507     4     Concurrency
    241     enq: HW - contention     574     0     38     0.07     0     378395     1645217925     3290255840     2     Configuration
    512     PX Deq: Execution Msg     3     0     28     9.45     28     283374     98582416     2723168908     6     Idle
    243     PX Deq: Execution Msg     3     0     27     9.1     27     272983     98582416     2723168908     6     Idle
    223     PX Deq: Execution Msg     3     0     25     8.26     24     247673     98582416     2723168908     6     Idle
    510     PX Deq: Execution Msg     3     0     24     7.86     23     235777     98582416     2723168908     6     Idle
    243     PX Deq Credit: need buffer     1     0     17     17.2     17     171964     2267953574     2723168908     6     Idle
    223     PX Deq Credit: need buffer     1     0     16     15.92     16     159230     2267953574     2723168908     6     Idle
    512     PX Deq Credit: need buffer     1     0     16     15.84     16     158420     2267953574     2723168908     6     Idle
    510     direct path read     360     0     15     0.04     4     153411     3926164927     1740759767     8     User I/O
    243     direct path read     352     0     13     0.04     6     134188     3926164927     1740759767     8     User I/O
    223     direct path read     359     0     13     0.04     5     129859     3926164927     1740759767     8     User I/O
    241     PX Deq: Execute Reply     6     0     13     2.12     10     127246     2599037852     2723168908     6     Idle
    510     PX Deq Credit: need buffer     1     0     12     12.28     12     122777     2267953574     2723168908     6     Idle
    512     direct path read     351     0     12     0.03     5     121579     3926164927     1740759767     8     User I/O
    241     PX Deq: Parse Reply     7     0     9     1.28     6     89348     4255662421     2723168908     6     Idle
    241     SQL*Net break/reset to client     2     0     6     2.91     6     58253     1963888671     4217450380     1     Application
    241     log file sync     1     0     5     5.14     5     51417     1328744198     3386400367     5     Commit
    510     cursor: pin S wait on X     3     2     2     0.83     1     24922     1729366244     3875070507     4     Concurrency
    512     cursor: pin S wait on X     2     2     2     1.07     1     21407     1729366244     3875070507     4     Concurrency
    243     cursor: pin S wait on X     2     2     2     1.06     1     21251     1729366244     3875070507     4     Concurrency
    241     library cache lock     29     0     1     0.05     0     13228     916468430     3875070507     4     Concurrency
    241     PX Deq: Join ACK     4     0     0     0.07     0     2789     4205438796     2723168908     6     Idle
    241     SQL*Net more data from client     6     0     0     0.04     0     2474     3530226808     2000153315     7     Network
    241     gc current block 2-way     5     0     0     0.04     0     2090     111015833     3871361733     11     Cluster
    241     enq: KO - fast object checkpoint     4     0     0     0.04     0     1735     4205197519     4217450380     1     Application
    241     gc current grant busy     4     0     0     0.03     0     1337     2277737081     3871361733     11     Cluster
    241     gc cr block 2-way     1     0     0     0.06     0     586     737661873     3871361733     11     Cluster
    223     db file sequential read     1     0     0     0.05     0     461     2652584166     1740759767     8     User I/O
    223     gc current block 2-way     1     0     0     0.05     0     452     111015833     3871361733     11     Cluster
    241     latch: row cache objects     2     0     0     0.02     0     434     1117386924     3875070507     4     Concurrency
    241     enq: TM - contention     1     0     0     0.04     0     379     668627480     4217450380     1     Application
    512     PX Deq: Msg Fragment     4     0     0     0.01     0     269     77145095     2723168908     6     Idle
    241     latch: library cache     3     0     0     0.01     0     243     589947255     3875070507     4     Concurrency
    510     PX Deq: Msg Fragment     3     0     0     0.01     0     215     77145095     2723168908     6     Idle
    223     PX Deq: Msg Fragment     4     0     0     0     0     145     77145095     2723168908     6     Idle
    241     buffer busy waits     1     0     0     0.01     0     142     2161531084     3875070507     4     Concurrency
    243     PX Deq: Msg Fragment     2     0     0     0     0     84     77145095     2723168908     6     Idle
    241     latch: cache buffers chains     4     0     0     0     0     73     2779959231     3875070507     4     Concurrency
    241     SQL*Net message to client     7     0     0     0     0     51     2067390145     2000153315     7     Network
    (yikes, is there a way to wrap that in equivalent of other forums' tag?)
v$session_wait (columns: SID, SEQ#, EVENT, P1TEXT, P1, P1RAW, P2TEXT, P2, P2RAW, P3TEXT, P3, P3RAW, WAIT_CLASS_ID, WAIT_CLASS#, WAIT_CLASS, WAIT_TIME, SECONDS_IN_WAIT, STATE):
    223     835     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     10     WAITING
    241     22819     row cache lock     cache id     13     000000000000000D     mode     0     00     request     5     0000000000000005     3875070507     4     Concurrency     -1     0     WAITED SHORT TIME
    243     747     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     7     WAITING
    510     10729     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     2     WAITING
    512     12718     PX Deq Credit: send blkd     sleeptime/senderid     268697599     000000001003FFFF     passes     1     0000000000000001     qref     0     00     1893977003     0     Other     0     4     WAITING
v$sess_io (columns: SID, BLOCK_GETS, CONSISTENT_GETS, PHYSICAL_READS, BLOCK_CHANGES, CONSISTENT_CHANGES):
    223     0     5779     5741     0     0
    241     38773810     2544298     15107     27274891     0
    243     0     5702     5688     0     0
    510     0     5729     5724     0     0
512     0     5682     5678     0     0

  • Help with multi-table mapping for one-to-many object inheritance

    Hi,
I have posted on here before regarding this (Toplink mapping for one-to-many object inheritance), but I am still having problems mapping my object model to my schema.
    Object model
    The Person and Organisation objects contain base information and have the primary keys person_id and organisation_id. It is important that there is no duplication of person and organisation records, no matter how many times they are saved in different roles.
    There are two types of licenceholder in the problem domain, and the ILicenceHolder interface defines information and methods that are common to both. The PersonalLicenceHolder object represents one of these types of licenceholder, and is always a person, so this class extends Person and implements ILicenceHolder.
    The additional information and methods that are required by the second type of licenceholder are defined in the interface IPremisesLicenceHolder, which extends ILicenceHolder. Premises licence holders can either be people or organisations, so I have two objects to represent these - PremisesLicenceHolderPerson which implements IPremisesLicenceHolder and extends Person, and PremisesLicenceHolderOrganisation which implements IPremisesLicenceHolder and extends Organisation.
    The model is further complicated by the fact that any single Person may be both a PersonalLicenceHolder and a PremisesLicenceHolderPerson, and may be so several times over. In this case, the same basic Person information needs to be linked to several different sets of licenceholder information. In the same way, any single Organisation may be a PremisesLicenceHolderOrganisation several times over.
    Sorry this is complicated!
    Schema
    I have Person and Organisation tables containing the basic information with the primary keys person_id and organisation_id.
    I have tried to follow Donald Smith's advice and have created a Role table to record the specialised information for the different types of licence holder. I want the foreign keys in this table to be licenceholder_id and licence_id. Licenceholder_id will reference either organisation_id or person_id, and licence_id will reference the primary key of the Licence table to link the licenceholder to the licence. Because I am struggling with the mapping, I have changed licenceholder_id to person_id in an attempt to get it working with the Person object before I try the Organisation.
    Then, when a new licenceholder is added, if the person/organisation is already in the database, a new record is created in the Role table linking the existing person/organisation to the existing licence rather than duplicating the person/organisation information.
    Mapping
    I am trying to use the toplink mapping workbench to map my PremisesLicenceHolderPerson object to my schema. I have mapped all inherited attributes to superclass (Person). The primary table that the attributes are mapped to is Person, and I have used the multi-table info tab to add Roles as an additional table and map the remaining attributes to that.
    I have created the references PERSON_ROLES which maps person.person_id to roles.person_id, ROLES_PERSON which maps roles.person_id to person.person_id and ROLES_LICENCE which maps roles.licence_id to licence.licence_id.
    I think I have put in all the relationships, but I cannot get rid of the error message "The following primary key fields are unmapped: PERSON_ID".
    Please can somebody tell me how to map this properly?
    Thank you.

I'm not positive about your mappings, but it looks like the Person object should really have a 1:M or M:M mapping to the LicenceHolder table. This then means that your object model should be similar, in that the Person object could have many Licences, instead of being a LicenceHolder. From the looks of it, you have it set up from the LicenceHolder perspective. What could be done instead is for a LicenceHolder to have a 1:1 reference to a Person data object, rather than actually be a Person. This would allow the person data to be easily shared among licences.
LicenceHolder1 has an entry in the LicenceHolder table and the Person table. LicenceHolder2 also has entries in these tables, but uses the same entry in the Person table - essentially it is the same person/person_id. If both are new objects, TopLink would try to insert the same Person object into the Person table twice. I'm not sure how you have gotten around or are planning to get around this problem.
Since you are using inheritance, LicenceHolder needs a writable mapping to the person.person_id field - most commonly done through a direct-to-field mapping. From the description, it looks like roles.person_id is a foreign key in the multiple-table mapping, meaning it would be set based on the value in the person.person_id field, but the person.person_id isn't actually mapped in the object. Check to make sure that the ID attribute LicenceHolder inherits from Person hasn't been remapped in the LicenceHolder descriptor to a different field.
    Best Regards,
    Chris
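In table terms, the suggested shape would be roughly this (a sketch only, with illustrative names):

-- licence-holder rows reference one shared person row instead of
-- extending it, so the same person can hold many licences without
-- duplicating person data
CREATE TABLE person (
  person_id  INTEGER PRIMARY KEY,
  name       VARCHAR2(100)
);
CREATE TABLE licence (
  licence_id INTEGER PRIMARY KEY
);
CREATE TABLE licence_holder (
  licence_holder_id INTEGER PRIMARY KEY,
  person_id         INTEGER NOT NULL REFERENCES person,   -- shared person data
  licence_id        INTEGER NOT NULL REFERENCES licence   -- the licence held
);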

  • Problems with ReportDocument and multi-table dataset...

    Hi,
    First post here and hoping someone can help with me with this problem.
    I have an ASP.NET app running using CR 2008 Basic to produce invoices - these are fairly complex and use multiple tables - data used is provided in lists and they work fine when displayed/printed/exported from the viewer.
My client now wants the ability to batch print these, so I'm trying to develop a page which will create a ReportDocument object and then print/export as required.
    As soon as I started down this path I received the 'invalid logon params problem' - so to simplify things and to get past this I've developed a simple 1 page report with which takes data from a dataset I've populated and passed to it.
    Here's the problem:
    1) If I put one table in the dataset and add a field from that table to the report it works OK.
    2) If I use a second table and field (without the first) it works OK.
    3) as soon as I have both tables and the report has field from each I get the 'Invalid logon parameters' error.
The tables are fine (since I can use each individually), the report is fine (it only has 1 or 2 fields and works with individual tables)... it's only when I have both tables that I get problems...
... this is driving me up the wall, and if CR can't handle this there's no way it's going to handle the more complex invoices with subreports.
Can anyone suggest what I'm doing wrong... or tell me whether I'm just pushing CR beyond its capabilities?
    The code I'm using to generate the ReportDocument is:
List<Invoice> rinv = Invoice.SelectOneContract(inv.Invoice_ID);
List<InvoiceLine> rline = InvoiceLine.SelectInvoice(inv.Invoice_ID);
DataSet ds = new DataSet();
ds.Tables.Add(InvoiceLineTable(rline));
ds.Tables.Add(InvoiceTable(rinv));
rdoc.FileName = Server.MapPath("~/Invoicing/test.rpt");
rdoc.SetDataSource(ds.Tables);
rdoc.ExportToDisk(ExportFormatType.PortableDocFormat, @"c:\test\test.pdf");
    ... so not rocket science and the error is always caused at the 'ExportToDisk' line.
    Thanks in advance!

    I've got nowhere trying to create a reportdocument and pass it a multi-table dataset, so decided to do it the 'dirty' way by adding all the controls to an aspx page and referring to them.
    I know I can do this because the whole issue is printing a report that is currently viewed.
    So ... I've now added the ObjectDataSources to the page as well as the CrystalReportSource and the CrystalViewer ...
    .. I've tested the page and the report appears within the viewer with all the correct data ...
    ...so with a certain amount of excitement I've added the following line to the code behind file:
    rptSrcContract.ReportDocument.PrintToPrinter(1, true, 1, 1);
    ... then I run the page and predictably the first thing that comes up is:
    Unable to connect: incorrect log on parameters.
    .. this is madness!
    1) The data is retrieved and is in the correct format otherwise the report would not display.
    2) the rptSrcContract.ReportDocument exists ... otherwise it would not display in the viewer.
So why does this want to log on to a database when the data was retrieved successfully and the security is running with a 'Network Services' account anyway????? (Actually, I know it has nothing to do with logging onto the database; this is just the generic Crystal Reports 'I have a problem' message.)
    ... sorry if this is a bit of an angry rant .. didn't get much sleep last night because of this ... all I want to be able to do is print a report from code .... surely this should be possible??

  • Single Table Inheritance with BDB and JPA

I am using Java-based BDB (4.0) with @Entity, @PrimaryKey, etc. annotations to define my model. I am using @SecondaryKey to define relationships, e.g.:
@Entity
public class User {
    @PrimaryKey(sequence = "ids_seq")
    private int id;
    @SecondaryKey(relate = Relationship.MANY_TO_ONE, relatedEntity = Department.class, onRelatedEntityDelete = DeleteAction.CASCADE)
    private Integer albumID;
}

@Entity
public class Department {
    @PrimaryKey(sequence = "ids_seq")
    private int id;
    @SecondaryKey(relate = Relationship.ONE_TO_MANY, relatedEntity = User.class)
    private Set<Integer> userIDs;
}
    Now, I want to extend User and apply Single Table Inheritance, e.g.
@Entity
public class Employee extends User {
}

    You may want to take a look at the "Entity Subclasses and Superclasses" section in the javadoc:
    http://www.oracle.com/technology/documentation/berkeley-db/je/java/com/sleepycat/persist/model/Entity.html
    Does this help?
--mark

  • Performance issues involving tables S031 and S032

    Hello gurus,
    I am having some performance issues. The program involves accessing data from S031 and S032.  I have pasted the SELECT statements below.  I have read through the forums for past postings regarding performance, but I wanted to know if there is anything that stands out as being the culprit of very poor performance, and how it can be corrected.  I am fairly new to SAP, so I apologize if I've missed an obvious error.  From debugging the program, it seems the 2nd select statement is taking a very long time to process. 
    GT_S032: approx. 40,000 entries
    S031:    approx. 90,000 entries
    MSEG:    approx. 115,000 entries
    MKPF:    approx. 100,000 entries
    MARA:    approx. 90,000 entries
    SELECT
      vrsio          "Version
      werks          "Plant
      lgort          "Storage Location
      matnr          "Material
      ssour          "Statistic(s) origin
    FROM s032
    INTO TABLE gt_s032
    WHERE ssour = space
      AND vrsio = c_000
      AND werks = gw_werks.
    IF sy-subrc = 0.
      SELECT
        vrsio        "Version
        werks        "Plant
        spmon        "Period to analyze - month
        matnr        "Material
        lgort        "Storage Location
        wzubb        "Valuated stock receipts value
        wagbb        "Value of valuated stock being issued
      FROM s031
      INTO TABLE gt_s031
      FOR ALL ENTRIES IN gt_s032
      WHERE ssour = gt_s032-ssour                                     
      AND   vrsio = gt_s032-vrsio                                     
      AND   spmon IN r_spmon
      AND   sptag = '00000000'                                      
      AND   spwoc = '000000'                                          
      AND   spbup = '000000'                               
      AND   werks = gt_s032-werks
      AND   matnr = gt_s032-matnr
      AND   lgort = gt_s032-lgort
      AND   ( wzubb <> 0 OR wagbb <> 0 ).
    ELSE.
      WRITE: 'No data selected'(m01).
      EXIT.
    ENDIF.
    SORT gt_s032 BY vrsio werks lgort matnr.
    SORT gt_s031 BY vrsio werks spmon matnr lgort.
    SELECT
      p~werks          "Plant
      p~matnr          "Material
      p~mblnr          "Document Number
      p~mjahr          "Document Year
      p~bwart          "Movement type
      p~dmbtr          "Amount in local currency
      t~shkzg          "Debit/Credit indicator
    INTO TABLE gt_scrap
    FROM mkpf AS h
    INNER JOIN mseg AS p
       ON h~mblnr = p~mblnr
      AND h~mjahr = p~mjahr
    INNER JOIN mara AS m
       ON p~matnr = m~matnr
    INNER JOIN t156 AS t
       ON p~bwart = t~bwart
    WHERE h~budat >= gw_duepr-begda
      AND h~budat <= gw_duepr-endda
      AND p~werks = gw_werks.
    Thanks so much for your help,
    Jayesh
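    Nothing in the first select looks wrong by itself, but two guards are commonly recommended before a FOR ALL ENTRIES select like the second one (a sketch, not from the original thread): the driver table must not be empty, because FOR ALL ENTRIES with an empty table selects every row, and duplicate keys inflate the statement the database has to execute.
    IF NOT gt_s032[] IS INITIAL.
      SORT gt_s032 BY ssour vrsio werks matnr lgort.
      DELETE ADJACENT DUPLICATES FROM gt_s032
             COMPARING ssour vrsio werks matnr lgort.
      " ... the S031 FOR ALL ENTRIES select from above goes here ...
    ENDIF.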

    Issue with table S031 and with FOR ALL ENTRIES.
    Hi,
    I have the following code, in which the SELECT on S031 takes a long time and then terminates with a timeout dump. What can I do to avoid exceeding the execution time limit of the ABAP program?
    TYPES:
      BEGIN OF TY_MTL,  " Material Master
        MATNR TYPE MATNR,   " Material Code
        MTART TYPE MTART,   " Material Type
        MATKL TYPE MATKL,   " Material Group
        MEINS TYPE MEINS,   " Base unit of Measure
        WERKS TYPE WERKS_D, " Plant
        MAKTX TYPE MAKTX,   " Material description (Short Text)
        LIFNR TYPE LIFNR,   " vendor code
        NAME1 TYPE NAME1_GP, " vendor name
        CITY  TYPE ORT01_GP, " City of Vendor
        Y_RPT TYPE P DECIMALS 3, "Yearly receipt
        Y_ISS TYPE P DECIMALS 3, "Yearly Consumption
        M_OPG TYPE P DECIMALS 3, "Month opg
        M_OPG1 TYPE P DECIMALS 3,
        M_RPT TYPE P DECIMALS 3, "Month receipt
        M_ISS TYPE P DECIMALS 3, "Month issue
        M_CLG TYPE P DECIMALS 3, "Month Closing
        D_BLK TYPE P DECIMALS 3, "Block Stock,
        D_RPT TYPE P DECIMALS 3, "Today receipt
        D_ISS TYPE P DECIMALS 3, "Day issues
        TL_FL(2) TYPE C,
        STATUS(4) TYPE C,
    END OF TY_MTL,
    BEGIN OF TY_OPG,      " Opening File
      SPMON TYPE SPMON,   " Period to analyze - month
      WERKS TYPE WERKS_D, " Plant
      MATNR TYPE MATNR,   " Material No
      BASME TYPE MEINS,   " Base unit of measure
      MZUBB TYPE MZUBB,   " Receipt quantity
      WZUBB TYPE WZUBB,   " Receipt value
      MAGBB TYPE MAGBB,   " Issues quantity
      WAGBB TYPE WAGBB,   " Issues value
    END OF TY_OPG,
    DATA :
           T_M  TYPE STANDARD TABLE OF TY_MTL INITIAL SIZE 0,
           WA_M TYPE TY_MTL,
           T_O  TYPE STANDARD TABLE OF TY_OPG INITIAL SIZE 0,
           WA_O TYPE TY_OPG.
    DATA: smonth1      TYPE spmon.  
    SELECT
      a~matnr
      a~mtart
      a~matkl
      a~meins
      b~werks
      INTO TABLE t_m FROM mara AS a
      INNER JOIN marc AS b
      ON a~matnr = b~matnr
    *  WHERE a~mtart EQ s_mtart
      WHERE a~matkl IN s_matkl
        AND b~werks IN s_werks
        AND b~matnr IN s_matnr.
    SELECT spmon
           werks
           matnr
           basme
           mzubb
           wzubb
           magbb
           wagbb
      FROM s031 INTO TABLE t_o
      FOR ALL ENTRIES IN t_m
      WHERE matnr = t_m-matnr
        AND werks IN s_werks
        AND spmon LE smonth1
        AND basme = t_m-meins.
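    One thing that stands out (an assumption based on the standard key of S031, not something stated above): the primary key starts with SSOUR, VRSIO and SPMON, so a select that omits SSOUR and VRSIO cannot use the primary index. A sketch of the usual shape, including the empty-table guard that FOR ALL ENTRIES always needs:
    * Sketch: supply the leading key fields so the primary index can be
    * used; an empty driver table would otherwise select ALL rows.
    IF NOT t_m[] IS INITIAL.
      SELECT spmon werks matnr basme mzubb wzubb magbb wagbb
        FROM s031 INTO TABLE t_o
        FOR ALL ENTRIES IN t_m
        WHERE ssour = space        " leading key field
          AND vrsio = '000'        " actual version
          AND spmon LE smonth1
          AND werks IN s_werks
          AND matnr = t_m-matnr
          AND basme = t_m-meins.
    ENDIF.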

  • Table View MULTI SELECT option and Event handling problems

    Hello All,
    I am facing a problem with the multiselect option in a table view. When I set the multiselect attribute of the table view's selection option, I am unable to select all the rows I want, because the onRowSelection event is active: as soon as I select one row the event fires, so multiple selection is impossible.
    Could you please tell me whether there is a way to put checkboxes in a table column, so that I can read the selected rows and perform my subsequent SQL operation?
    Also, I am not able to navigate in the table view with the BYPAGE or BYLINE option. When I click the navigation button, the page is refreshed and I lose my data.
    One more question: how can I keep my internal table values from one event to the next? I have used EXPORT/IMPORT, but the internal table values are cleared because the page is refreshed on every event switch/selection.
    Please respond.

    Hi Rahul,
    As I said, my second solution will help you: the values remain in the corresponding UI elements.
    For example, suppose you have a dropdown and a table view, and both trigger events. Bind the data of the table view in the dropdown's event, and bind the data of the dropdown in the table view's event:
    event = cl_htmlb_manager=>get_event( runtime->server->request ).
    CASE event->id.
      WHEN 'dd_id'.               " drop down event fired
        " bind data for the drop down ...
        dd ?= cl_htmlb_manager=>get_data( request = runtime->server->request
                                          name    = 'dropdown'
                                          id      = 'dd_id' ).    " id of the drop down
        " ... and along with it, bind data for the table view
        tbv ?= cl_htmlb_manager=>get_data( request = runtime->server->request
                                           name    = 'tableView'
                                           id      = 'tbv_id' ).  " id of the table view
      WHEN 'tbv_id'.              " table view event fired
        " bind data for the drop down ...
        dd ?= cl_htmlb_manager=>get_data( request = runtime->server->request
                                          name    = 'dropdown'
                                          id      = 'dd_id' ).
        " ... and along with it, bind data for the table view
        tbv ?= cl_htmlb_manager=>get_data( request = runtime->server->request
                                           name    = 'tableView'
                                           id      = 'tbv_id' ).
    ENDCASE.
    This is how data should be bound in a stateless application: all the UI elements must be bound again, because the global data is refreshed on every request.
    Hope this helps.
    Regards,
    Imran.

  • Memory and performance when copying a sorted table to a standard table

    Hello,
    As you all probably know, it's not possible to use a sorted table as a tables parameter of a function module, but sometimes you want to use a sorted table in your function module for performance reasons, and at the end of the function module, you just copy it to a standard table to return to the calling program.
    The problem with this is that, at that moment, the contents of the table are in memory twice, which could result in the well-known STORAGE_PARAMETERS_WRONG_SET runtime exception.
    I've been looking for ways to do this without using an excessive amount of memory while still performing well. I tried several methods; all have their advantages and disadvantages, so I was hoping someone here could help me come up with the best way to do this. Both memory and performance are an issue.
    Requirements :
    - Memory usage must be as low as possible
    - Performance must be as high as possible
    - Method must work on all SAP versions from 4.6c and up
    So far I have tried 3 methods.
    I included a test report to this message, the output of this on my dev system is :
    Test report for memory usage of copying tables

    table1[] = table2[]
      Memory :   192,751 Kb     Runtime:   436,842
    Loop using work area (with delete from original table)
      Memory :   196,797 Kb     Runtime: 1,312,839
    Loop using field symbol (with delete from original table)
      Memory :   196,766 Kb     Runtime: 1,295,009
    The code of the program :
    I had some problems pasting the code here, so it can be found at http://pastebin.com/f5e2848b5
    Thanks in advance for the help.
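    Since the pastebin paste may no longer be available, the two loop variants measured above look roughly like this (a sketch, assuming a sorted source table lt_sorted and a standard target table et_standard):
    FIELD-SYMBOLS: <fs_line> TYPE any.   " line type of lt_sorted in practice

    * Copy row by row and delete the source row immediately, trying to
    * cap peak memory (work area variant is the same with INTO wa):
    LOOP AT lt_sorted ASSIGNING <fs_line>.
      APPEND <fs_line> TO et_standard.
      DELETE lt_sorted.                  " drops the current row inside the loop
    ENDLOOP.
    As the numbers above show, the delete barely lowers the peak, presumably because table memory is not released row by row, so both loop variants still used slightly more memory than the plain copy.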

    I've had another idea:
    Create a RFC function like this (replace SOLI_TAB with your table types):
    FUNCTION Z_COPY_TABLE .
    *"*"Lokale Schnittstelle:
    *"  IMPORTING
    *"     VALUE(IT_IN) TYPE  SOLI_TAB
    *"  EXPORTING
    *"     VALUE(ET_OUT) TYPE  SOLI_TAB
    et_out[] = it_in[].
    ENDFUNCTION.
    and then try something like this in your program:
    DATA: gd_copy_done TYPE c LENGTH 1.
    DATA: gt_one TYPE soli_tab.
    DATA: gt_two TYPE soli_tab.
    PERFORM move_tables.
    FORM move_tables.
      CLEAR gd_copy_done.
      CALL FUNCTION 'Z_COPY_TABLE'
        STARTING NEW TASK 'ztest'
        PERFORMING copy_done ON END OF TASK
        EXPORTING
          it_in = gt_one[].
      CLEAR gt_one[].
      WAIT UNTIL gd_copy_done IS NOT INITIAL.
    ENDFORM.
    FORM copy_done USING ld_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_COPY_TABLE'
       IMPORTING
         et_out        = gt_two[].
      gd_copy_done = 'X'.
    ENDFORM.
    Maybe this is a little bit faster than the Memory-Export?

  • View links in multi table relations

    Is it advisable (in terms of performance, e.g.) to create view links and view objects as local variables in multi-table relations?
    Example: the JDev online help says to use such multi-table relations like this:
    // A (one) -> B (many) -> C (many)
    ViewLink a2b = appMod.findViewLink("AtoB");
    ViewLink b2c = appMod.findViewLink("BtoC");
    ViewObject aV = a2b.getSource();
    ViewObject bV = a2b.getDestination();
    ViewObject cV = b2c.getDestination();
    while (aV.hasNext()) {
        Row aR = aV.next();
        while (bV.hasNext()) {
            Row bR = bV.next();
            while (cV.hasNext()) {
                Row cR = cV.next();
            }
        }
    }
    I would rather keep everything concerning a, b, c together, especially when more tables (d, e, ...) are added, like this:
    ViewLink a2b = appMod.findViewLink("AtoB");
    ViewObject aV = a2b.getSource();
    while (aV.hasNext()) {
        Row aR = aV.next();
        ViewLink b2c = appMod.findViewLink("BtoC");
        ViewObject bV = a2b.getDestination();
        while (bV.hasNext()) {
            Row bR = bV.next();
            ViewObject cV = b2c.getDestination();
            while (cV.hasNext()) {
                Row cR = cV.next();
            }
        }
    }
    Is there anything to be said against this approach (in terms of performance, for example)? I am not sure I remember whether this was the approach used in the HotelReservationSystem example.
    Thanks.
    Rx

    For this to work you have to either build a view based on the entities from which you need attributes (joined by the FK), or build a ViewObject with a SQL statement giving you all the attributes you need.
    The first case enables you to edit the attributes; the second gives you read-only access to them.
    What you are trying to do isn't a master-detail connection; you are doing a join of some tables.
    Timo

  • Multi-table mapping is not inserting into the primary table first.

    I have an inheritance mapping where the children are mapped to the parent table oids with the "Multi-Table Info" tab.
    One of children is not inserting properly. Its insert is attempting to insert into one of the tables from the "Additional Tables" of "Multi-Table Info" instead of the primary table that it is mapped to.
    The other children insert correctly. This child is not much different from those.
    I looked through the forums, but found nothing similar.

    I would expect the children to be inserted into both the primary table and the additional table. Is the object in question inserted into the primary table at all? Is the problem that it is being inserted into the additional table first? If it is, are the primary key names different? Is it a foreign key relationship?
    If the object in question has no fields in the additional table, is it mapped to the additional table? (It should not be.)
    Perhaps providing the deployment XML will help determine the problem.
    --Gordon

  • ASCII character/string processing and performance - char[] versus String?

    Hello everyone
    I am a relative novice to Java; I have a procedural C programming background.
    I am reading many very large (many GB) comma/double-quote separated ASCII CSV text files and performing various kinds of pre-processing on them, prior to loading into the database.
    I am using Java7 (the latest) and using NIO.2.
    The IO performance is fine.
    My question is about the performance of char[] arrays versus the String and StringBuilder classes accessed with their charAt() methods.
    I read a file one line/record at a time and then I process it. Regex is not an option (too slow, and it cannot handle all the cases I need to cover).
    I noticed that accessing a single character of a given String (or StringBuilder) via String.charAt(i) is several times (5+ times?) slower than indexing into a char array.
    My question: is this a correct observation regarding the charAt() versus char[i] performance difference, or am I doing something wrong in my use of the String class?
    What is the best way (performance-wise) to process character strings in Java if I need to process them one character at a time?
    Is there another approach that I should consider?
    Many thanks in advance
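    For reference, the comparison being described looks something like this (a minimal sketch; the method names are made up, and the usual micro-benchmarking caveats about JIT warm-up apply):
    // Count commas in a line two ways: String.charAt() with the length
    // hoisted out of the loop, versus indexing a char[] copy.
    public final class CharAccessDemo {

        static int countCommas(String line) {
            int n = 0;
            final int len = line.length();   // hoisted: not re-read every iteration
            for (int i = 0; i < len; i++) {
                if (line.charAt(i) == ',') n++;
            }
            return n;
        }

        static int countCommas(char[] chars) {
            int n = 0;
            for (int i = 0; i < chars.length; i++) {
                if (chars[i] == ',') n++;
            }
            return n;
        }

        public static void main(String[] args) {
            String line = "a,\"b,c\",d";
            System.out.println(countCommas(line));               // 3
            System.out.println(countCommas(line.toCharArray())); // 3, after an O(n) copy
        }
    }
    Once the length is hoisted, as the poster later found, the JIT typically makes the two loops perform very similarly; the char[] version additionally pays for the toCharArray() copy.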

    >
    Once I took that String.length() method out of the 'for loop' and used an integer length local variable, as you have in your code, the performance is very close between the array-of-char and String charAt() approaches.
    >
    You are still worrying about something that is irrelevant in the greater scheme of things.
    It doesn't matter how fast the CPU processing of the data is if it is faster than you can write the data to the sink. The process is:
    1. read data into memory
    2. manipulate that data
    3. write data to a sink (database, file, network)
    The reading and writing of the data are going to be tens of thousands of times slower than any CPU you will be using. That read/write part of the process is the limiting factor of your throughput; not the CPU manipulation of step #2.
    Step #2 can only go as fast as steps #1 and #3 permit.
    Like I said above:
    >
    The best 'file to database' performance you could hope to achieve would be loading simple, 'known to be clean' records of a file into ONE table column defined, perhaps, as VARCHAR2(1000); that is, with NO processing of the records at all to determine column boundaries.
    That performance would be the standard you would measure all others against and would typically be in the hundreds of thousands or millions of records per minute.
    What you would find is that you can perform one heck of a lot of processing on each record without slowing that 'read and load' process down at all.
    >
    Regardless of the sink (DB, file, network) when you are designing data transport services you need to identify the 'slowest' parts. Those are the 'weak links' in the data chain. Once you have identified and tuned those parts the performance of any other step merely needs to be 'slightly' better to avoid becoming a bottleneck.
    That CPU part for step #2 is only rarely, if ever, the problem. Don't even consider it for specialized tuning until you demonstrate that it is needed.
    Besides, if your code is properly designed and modularized you should be able to 'plug n play' different parse and transform components after the framework is complete and in the performance test stage.
    >
    The only thing that is fixed is that all input files are ASCII (not Unicode) characters in range of 'space' to '~' (decimal 32-126) or common control characters like CR,LF,etc.
    >
    Then you could use byte arrays and byte processing to determine the record boundaries even if you then use String processing for the rest of the manipulation.
    That is what my framework does. You define the character set of the file and a 'set' of allowable record delimiters as Strings in that character set. There can be multiple possible record delimiters, and each one can be multi-character (e.g. you can use 'XyZ' if you want).
    The delimiter set is converted to byte arrays and the file is read using RandomAccessFile and double-buffering and a multiple mark/reset functionality. The buffers are then searched for one of the delimiter byte arrays and the location of the delimiter is saved. The resulting byte array is then saved as a 'physical record'.
    Those 'physical records' are then processed to create 'logical records'. The distinction is due to possible embedded record delimiters as you mentioned. One logical record might appear as two physical records if a field has an embedded record delimiter. That is resolved easily since each logical record in the file MUST have the same number of fields.
    So a record with an embedded delimiter will have fewer fields than required, meaning it needs to be combined with one or more of the following records.
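    A sketch of that physical-to-logical rule (the helper names are hypothetical, and a simple quote toggle stands in for full quote handling):
    import java.util.Iterator;

    final class LogicalRecords {
        // Append physical records until the expected field count is reached:
        // a too-short record means a delimiter was embedded inside a field.
        static String nextLogicalRecord(Iterator<String> physical, int expectedFields) {
            StringBuilder sb = new StringBuilder(physical.next());
            while (countFields(sb) < expectedFields && physical.hasNext()) {
                sb.append('\n').append(physical.next()); // the delimiter was data
            }
            return sb.toString();
        }

        // Count comma-separated fields, ignoring commas inside double quotes.
        static int countFields(CharSequence s) {
            int fields = 1;
            boolean inQuotes = false;
            for (int i = 0; i < s.length(); i++) {
                char c = s.charAt(i);
                if (c == '"') inQuotes = !inQuotes;
                else if (c == ',' && !inQuotes) fields++;
            }
            return fields;
        }
    }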
    >
    My files have no metadata, some are comma delimited and some comma and double quote delimited together, to protect the embedded commas inside columns.
    >
    I didn't mean the files themselves needed to contain metadata. I just meant that YOU need to know what metadata to use. For example, you need to know that there should ultimately be 10 fields for each record. The file itself may have fewer physical fields due to TRAILING NULLCOLS, whereby all consecutive NULL fields at the end of a record do not need to be present.
    >
    The number of columns in a file is variable and each line in any one file can have a different number of columns. Ragged columns.
    There may be repeated null columns in any line, like ,,, or "","","" or any combination of the above.
    There may also be spaces between delimiters.
    The files may be UNIX/Linux terminated or Windows Server terminated (CR/LF or CR or LF).
    >
    All of those are basic requirements and none of them present any real issue or problem.
    >
    To make it even harder, there may be embedded LF characters inside the double quoted columns too, which need to be caught and weeded out.
    >
    That only makes it 'harder' in the sense that virtually NONE of the standard software available for processing delimited files takes that into account. There have been some attempts (you can find them on the net) at using various 'escaping' techniques to escape those characters where they occur, but none of them ever caught on and I have never found any in widespread use.
    The main reason for that is that the software used to create the files to begin with isn't written to ADD the escape characters but is written on the assumption that they won't be needed.
    That read/write for 'escaped' files has to be done in pairs. You need a writer that can write escapes and a matching reader to read them.
    Even the latest versions of Informatica and DataStage cannot export a simple one-column table that contains an embedded record delimiter and read it back properly. Those tools simply have NO functionality to let you even TRY to detect that embedded delimiters exist, let alone do anything about it by escaping those characters. I gave up back in the '90s trying to convince the Informatica folk to add that functionality to their tool. It would be simple to do.
    >
    Some numeric columns will also need processing to handle currency signs and numeric formats that are not valid for the database input.
    It does not feel like a job for RegEx (I want to be able to maintain the code, and complex RegEx is often 'write-only' code that a 9600 baud modem would be proud of!), and I don't think PL/SQL will be any faster or easier than Java for this sort of character-based work.
    >
    Actually, for 'validating' that a string of characters conforms (or not) to a particular format, regular expressions are an excellent fit. Though, as you suggest, the actual parsing of a valid string to extract the data is not well-suited to RegEx; that is more appropriate for a custom format class that implements the proper business rules.
    You are correct that PL/SQL is NOT the language to use for such string parsing. However, Oracle does support Java stored procedures, so that could be done in the database. I would only recommend pursuing that approach if you already needed to perform some substantial data validation or processing in the DB to begin with.
    >
    I have no control over format of the incoming files, they are coming from all sorts of legacy systems, many from IBM mainframes or AS/400 series, for example. Others from Solaris and Windows.
    >
    Not a problem. You just need to know what the format is so you can parse it properly.
    >
    Some files will be small, some many GB in size.
    >
    Not really relevant except as it relates to the need to SINK the data at some point. The larger the amount of SOURCE data the sooner you need to SINK it to make room for the rest.
    Unfortunately, the very nature of delimited data with varying record lengths and possible embedded delimiters means that you can't really chunk the file to support parallel read operations effectively.
    You need to focus on designing the proper architecture to create a modular framework of readers, writers, parsers, formatters, etc. Your concern with details like String versus array is premature at best.
    My framework has been doing what you are proposing and has been in use for over 20 years by three different major international clients. I have never had any issues at the level of detail you have asked about in this thread.
    Throughput is limited by the performance of the SOURCE and the SINK. The processing in between has NEVER been an issue.
    A modular framework allows you to fine-tune or even replace a component at any time with just 'plug n play'. That is what interfaces are all about. Any code you write for a parser should be based on an interface contract. That allows you to write the initial code using the simplest possible method and then later, if and ONLY if that particular module becomes a bottleneck, replace it with one that is more performant.
    Your initial code should ONLY use standard, well-established constructs until there is a demonstrated need for something else. For your use case that means String processing, not byte arrays (except for detecting record boundaries).
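    A minimal sketch of that interface-contract idea (all names hypothetical):
    import java.util.Arrays;
    import java.util.List;

    // Contract: turn one logical record into its fields.
    interface RecordParser {
        List<String> parse(String logicalRecord);
    }

    // Start with the simplest standard-library implementation; swap in a
    // smarter one behind the same interface only if profiling demands it.
    class NaiveCsvParser implements RecordParser {
        @Override
        public List<String> parse(String logicalRecord) {
            return Arrays.asList(logicalRecord.split(",", -1)); // naive: ignores quoting
        }
    }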
