JPA and indexed collections

Just out of interest, why doesn't JPA support indexed collections? Using ordered lists, if I wanted to insert an object at a particular point, I'd have to do something like this:
myObject.getOrderedList().add(2, childObject);
int i = 0;
for (ChildObject o : myObject.getOrderedList()) {
    o.setIndex(i);
    i++;
}
Is that right? If there were indexed lists, I could just do this:
myObject.getIndexedList().add(2, childObject);
Or is there a simpler way to implement ordered lists in JPA?
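For reference: JPA 1.0 has no indexed (position-aware) lists, which is why the manual renumbering loop above is needed, but JPA 2.0 added @OrderColumn, where the provider maintains the position column itself, so a plain indexed add() is enough. A minimal sketch under that assumption (a JPA 2.0 provider; the entity, join-column and order-column names below are made up for illustration):

import java.util.ArrayList;
import java.util.List;
import javax.persistence.*;

@Entity
public class MyObject {
    @Id @GeneratedValue
    private Long id;

    // The provider keeps CHILD_INDEX in step with the list position,
    // so inserting in the middle needs no manual renumbering.
    @OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
    @JoinColumn(name = "PARENT_ID")
    @OrderColumn(name = "CHILD_INDEX")
    private List<ChildObject> children = new ArrayList<ChildObject>();

    public List<ChildObject> getChildren() {
        return children;
    }
}

@Entity
class ChildObject {
    @Id @GeneratedValue
    private Long id;
}

With this mapping, myObject.getChildren().add(2, childObject) is all that is needed; the provider renumbers the later entries on flush.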

Try using http://www.compass-project.org/

Similar Messages

  • Best practice for PK and indexes?

    Dear All,
    What is the best practice for making primary keys and indexes? Should we keep them in the same tablespace as the table, or should we create a separate tablespace for all indexes and primary keys? Please note I am talking about a table that has 21 million rows at the moment and is growing by 10k to 20k rows daily. This table is also heavily involved in daily reports and is causing slow performance. Currently the complete table, with all associated objects such as indexes and the PK, is stored in one separate tablespace. If my approach is right, then please advise how I can improve the performance of retrieval and DML operations on this table?
    Thanks in advance..
    Zia Shareef

    Well, thanks for the valuable advice... I am using Oracle 8i and let me tell you the exact problem...
    My billing database has two major tables having almost 21 million rows each... one holds collection data and the other invoices... many reports show the data by joining the Customer + Collection + Invoices tables.
    There are 5 common fields between the invoices (reading) and collection tables:
    YEAR, MONTH, AREA_CODE, CONS_CODE, BILL_TYPE(adtl)
    One of my batch processes has the following update and it is VERY, VERY slow:
    UPDATE reading r
    SET bamount = (SELECT sum(camount)
                     FROM collection cl
                    WHERE r.ryear = cl.byear
                      AND r.rmonth = cl.bmonth
                      AND r.area_code = cl.area_code
                      AND r.cons_code = cl.cons_code
                      AND r.adtl = cl.adtl)
    WHERE area_code = 1
    Tentatively, area_code 1 has 20,000 consumers.
    Each consumer may have 72 invoices, and against these invoices there may be 200 rows in the collection table (the system has provision to record partial payments against one invoice).
    NOTE: Please note that presently my process is based on cursors, so the above query runs for one consumer at a time, but just to give an idea I have written it for the whole area.
    Mr. Yingkuan, can you please tell me how I can check whether the table's statistics are current and how I can bring them up to date? Does it really affect performance?
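    Not part of the original thread, but regarding the statistics question above: the LAST_ANALYZED column of USER_TABLES shows when optimizer statistics were last gathered for a table, and DBMS_STATS.GATHER_TABLE_STATS refreshes them; stale statistics can certainly lead the optimizer to a poor plan. A rough JDBC sketch of both steps (the connection URL and credentials are placeholders; normally you would just run the same statements from SQL*Plus):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class StatsCheck {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");

            // 1) Check when statistics were last gathered for the READING table.
            PreparedStatement ps = con.prepareStatement(
                    "SELECT last_analyzed FROM user_tables WHERE table_name = ?");
            ps.setString(1, "READING");
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                System.out.println("Last analyzed: " + rs.getTimestamp(1));
            }

            // 2) Refresh the statistics for that table.
            CallableStatement cs = con.prepareCall(
                    "BEGIN DBMS_STATS.GATHER_TABLE_STATS(USER, 'READING'); END;");
            cs.execute();

            con.close();
        }
    }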

  • Hashmap, Lists and Indexing

    I've had this design problem come up twice already (in the last 3 weeks), so I figured it's time to consult the sages.
    Though I know how to use a HashMap in code, I don't really know anything technical about them (or any other map or hash). I don't really have a grasp of when to use them over ArrayList or LinkedList aside from obvious key/value situations.
    I see them used almost everywhere (I use them almost nowhere). I hear they are faster than get() methods in Lists (which I use always).
    Anyway, this is my requirement:
    I have an object that has a unique index (ordering value). This index isn't the position that the object has in the list; it's just some number that is sortable and unique (and usually not consecutive).
    Example:
    13
    105
    106
    418
    2107
    (not 1, 2, 3, 4, 5)
    I could have from 0 to 30,000+ of these objects. Right now I'm using a LinkedList because, from what I understand, they are better for random insertion (which there will be a lot of).
    Because I don't have technical knowledge of HashMaps I don't use them - but I thought the index numbers would make good keys (they're guaranteed to be unique).
    Here's the problem. When I grab a random object I (as a requirement) need to be able to get the previous/next object. Using the example above: if I'm at the object indexed 105, I'd need to know that the object indexed 13 is the previous entry in the collection.
    I did research this and the HashMap API specifically says:
    "This class makes no guarantees as to the order of the map; in particular, it does not guarantee that the order will remain constant over time."
    So HashMap is unusable for my purposes, right?
    Right now I am using a binary search with a comparator to always insert objects into the LinkedList sorted ascending by that index number. I am using the same binary search and comparator to then retrieve an object quickly. To grab the previous object I get the object in the previous element and grab its index number.
    So my question is: are my assertions correct? Is my LinkedList with a binary search for keeping it sorted and quickly accessible an OK method?
    Thanks for any input!

    Thanks. The indices are unique.
    How do you get the previous entry in a TreeMap (because you won't have the key)? I think you would have to maintain a copy of the keys in an ArrayList and use Collections.sort() every time a key is added or removed. Use the ArrayList to randomly pick a key and its adjacent keys, which you would then in turn use to retrieve the objects from a Map (because order doesn't matter there anymore).
    Message was edited by:
    CodeOnFire-again
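    A side note on the TreeMap idea above: on Java 6 and later, TreeMap implements NavigableMap, so the previous and next keys can be read directly with lowerKey()/higherKey() (or lowerEntry()/higherEntry()), without keeping a parallel sorted ArrayList. A small sketch using the index numbers from the post (the String values stand in for the real objects):

    import java.util.NavigableMap;
    import java.util.TreeMap;

    public class IndexNavigation {
        public static void main(String[] args) {
            // Keys are the unique, non-consecutive index numbers.
            NavigableMap<Integer, String> objects = new TreeMap<Integer, String>();
            objects.put(13, "a");
            objects.put(105, "b");
            objects.put(106, "c");
            objects.put(418, "d");
            objects.put(2107, "e");

            // Previous and next keys relative to 105.
            System.out.println(objects.lowerKey(105));   // prints 13
            System.out.println(objects.higherKey(105));  // prints 106
        }
    }

    Insertion into a TreeMap is O(log n), so it also covers the "lots of random insertions" requirement reasonably well.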

  • [Newbie] Not able to connect JPA and Hibernate?

    Hi,
    In the last few days I have read some tutorials and started doing JPA and Hibernate exercises, but I am not able to make it work. Can someone please point out what it is that I am doing wrong? Here are the details of what I am doing.
    I am using
    IDE : Eclipse EE Indigo
    Following jars
    antlr-2.7.6.jar
    commons-collections-3.2.jar
    dom4j-1.6.1.jar
    hibernate-annotations-3.4.0.GA.jar
    hibernate-commons-annotations-3.1.0.GA.jar
    hibernate-entitymanager-3.4.0.GA.jar
    hibernate-jpa-2.0-api.jar
    hibernate3-3.3.2.GA.jar
    javaee-api-5.0-3.jar
    javassist-3.9.0.GA.jar
    junit-4.8.2.jar
    log4j-1.2.12.jar
    slf4j-api-1.6.1.jar
    slf4j-simple-1.6.1.jar
    sqljdbc.jar => For connecting to MS SQL Server database
    junit-4.8.2 => For testing the application
    The target runtime is set as JBoss 5.0 with jars of JBoss 6.0
    Here is my persistence.xml. This file is under "src\META-INF"
    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
         <persistence-unit name="JH1" transaction-type="RESOURCE_LOCAL">
              <provider>org.hibernate.ejb.HibernatePersistence</provider>
              <class>entity.Users</class>
              <exclude-unlisted-classes>false</exclude-unlisted-classes>
              <properties>
                   <property name="hibernate.dialect" value="org.hibernate.dialect.SQLServerDialect"/>
                   <property name="hibernate.show_sql" value="true"/>
                   <property name="javax.persistence.jdbc.url" value="jdbc:sqlserver://localhost:1433;databaseName=TempEPMUser"/>
                   <property name="javax.persistence.jdbc.user" value="user"/>
                   <property name="javax.persistence.jdbc.password" value="password"/>
                   <property name="javax.persistence.jdbc.driver" value="com.microsoft.sqlserver.jdbc.SQLServerDriver"/>
                   <property name="javax.persistence.transactionType" value="RESOURCE_LOCAL"/>
              </properties>
         </persistence-unit>
    </persistence>
    Here is the Users.java code. This is the entity class created using the context menu item JPA Entities from tables
    package entity;
    import java.io.Serializable;
    import javax.persistence.*;
    import java.sql.Timestamp;
    import java.math.BigDecimal;
    /** The persistent class for the Users database table. */
    @Entity
    public class Users implements Serializable {
         private static final long serialVersionUID = 1L;
         @Id
         @GeneratedValue(strategy=GenerationType.AUTO)
         @Column(name="Usr_UserID")
         private long usr_UserID;
         @Column(name="Usr_DeptId")
         private BigDecimal usr_DeptId;
         public Users() {
         }
         public long getUsr_UserID() {
              return this.usr_UserID;
         }
         public void setUsr_UserID(long usr_UserID) {
              this.usr_UserID = usr_UserID;
         }
         // Getter for the department id, used by TestUser below.
         public BigDecimal getUsr_DeptId() {
              return this.usr_DeptId;
         }
    }
    Here is the testing code TestUser.java
    package testentity;
    import java.util.List;
    import javax.persistence.EntityManager;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.EntityTransaction;
    import javax.persistence.Persistence;
    import javax.persistence.PersistenceException;
    import org.apache.log4j.BasicConfigurator;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import entity.Users;
    public class TestUser {
         private EntityManagerFactory emf;
         private EntityManager em;
         @Before
         public void initEmfAndEm() {
              BasicConfigurator.configure();
              try {
                   emf = Persistence.createEntityManagerFactory("JH1");
              } catch (PersistenceException pe) {
                   System.out.println(pe.getMessage());
              }
              em = emf.createEntityManager();
         }
         @After
         public void cleanup() {
              em.close();
         }
         @Test
         public void emptyTest() {
              EntityTransaction et = em.getTransaction();
              et.begin();
              @SuppressWarnings("unchecked")
              final List<Users> listUser = em.createQuery("select usr_DeptID from Users").getResultList();
              et.commit();
              for (Users usr : listUser) {
                   int depid = usr.getUsr_DeptId().intValue();
                   System.out.println("User Department id is " + depid);
              }
         }
    }
    When I execute the code I get the following error message:
    java.lang.UnsupportedOperationException: The user must supply a JDBC connection
    at org.hibernate.connection.UserSuppliedConnectionProvider.getConnection(UserSuppliedConnectionProvider.java:54)
    at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:446)
    at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:167)
    at org.hibernate.jdbc.JDBCContext.connection(JDBCContext.java:142)
    at org.hibernate.transaction.JDBCTransaction.begin(JDBCTransaction.java:85)
    at org.hibernate.impl.SessionImpl.beginTransaction(SessionImpl.java:1354)
    at org.hibernate.ejb.TransactionImpl.begin(TransactionImpl.java:38)
    at testentity.TestUser.emptyTest(TestUser.java:42)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31)
    at org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)

    Hi,
    Try following the JPA tutorials for WebLogic; JPA using EclipseLink or Hibernate will be the same if you stick to the specification. WebLogic 10.3.4.0 ships with a Java EE 6 compliant JPA 2.0 implementation in EclipseLink - you may want to give that a try.
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/WebLogic_Web_Tutorial
    http://wiki.eclipse.org/EclipseLink/Examples/Distributed
    I also have a JBoss 6 tutorial for getting JPA working as well.
    BTW, your persistence unit is currently application managed - try switching to container managed - then most of your jar dependencies will go away as everything is already setup for you to do dependency injection via the container.
    http://wiki.eclipse.org/EclipseLink/Examples/JPA/JBoss_Web_Tutorial
    thank you
    Michael O'Brien
    http://www.eclipselink.org
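    One more note, prompted by the stack trace above: Hibernate falls back to UserSuppliedConnectionProvider when it finds no connection settings, and the javax.persistence.jdbc.* keys are JPA 2.0 property names that the older hibernate-entitymanager 3.4 jars listed above may simply ignore. A hedged sketch of one possible workaround: pass Hibernate's native connection keys programmatically (the values are copied from the persistence.xml above; the class name EmfBootstrap is made up for illustration). The same hibernate.connection.* keys could equally be placed in persistence.xml.

    import java.util.HashMap;
    import java.util.Map;
    import javax.persistence.EntityManagerFactory;
    import javax.persistence.Persistence;

    public class EmfBootstrap {
        public static EntityManagerFactory create() {
            // Hibernate-native connection keys; hibernate-entitymanager 3.4 understands
            // these, whereas the JPA 2.0 javax.persistence.jdbc.* keys may be ignored.
            Map<String, String> props = new HashMap<String, String>();
            props.put("hibernate.connection.driver_class",
                    "com.microsoft.sqlserver.jdbc.SQLServerDriver");
            props.put("hibernate.connection.url",
                    "jdbc:sqlserver://localhost:1433;databaseName=TempEPMUser");
            props.put("hibernate.connection.username", "user");
            props.put("hibernate.connection.password", "password");
            return Persistence.createEntityManagerFactory("JH1", props);
        }
    }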

  • UCM Indexer - Collection Rebuild Cycle erroring out

    We are trying to run the indexer on a huge set of JPG files. The current count is around 2 million. After we start the collection rebuild cycle, we get the following errors. Can someone advise how to fix this? The first two errors are repetitive, and ultimately indexing is aborted when we get the last error.
    File is missing. Exception type is 'java.lang.Throwable'. [ Details ]
    An error has occurred. The stack trace below shows more information.
    !syFileMissing,65c68c754793bf43,1,65572!syExceptionType,java.lang.Throwable
    java.lang.Throwable
         at intradoc.common.IdcLogWriter.doMessageAppend(IdcLogWriter.java:80)
         at intradoc.common.Log.addMessage(Log.java:270)
         at intradoc.common.Log.errorEx2(Log.java:218)
         at intradoc.common.LoggingUtils.logMessage(LoggingUtils.java:99)
         at intradoc.common.SystemUtils.reportErrorEx(SystemUtils.java:555)
         at intradoc.common.SystemUtils.errEx(SystemUtils.java:640)
         at intradoc.common.SystemUtils.err(SystemUtils.java:631)
         at intradoc.indexer.CommonIndexerBulkLoader.handleLoadError(CommonIndexerBulkLoader.java:586)
         at intradoc.indexer.CommonIndexerBulkLoader.loadRecordWebChange(CommonIndexerBulkLoader.java:216)
         at intradoc.indexer.IndexerBulkLoader.createBulkLoad(IndexerBulkLoader.java:310)
         at intradoc.indexer.IndexerBulkLoader.doWork(IndexerBulkLoader.java:165)
         at intradoc.indexer.Indexer.doIndexing(Indexer.java:439)
         at intradoc.indexer.Indexer.buildIndex(Indexer.java:348)
         at intradoc.server.IndexerMonitor.doIndexing(IndexerMonitor.java:1012)
         at intradoc.server.IndexerMonitor$4.run(IndexerMonitor.java:832)
    *Failed to find indexable webviewable for content item 65c68c754793bf43 with revlabel 1 and dID 65572. Exception type is 'java.lang.Throwable'. [ Details ]*
    An error has occurred. The stack trace below shows more information.
    !csFailedToFindIndexableFile,65c68c754793bf43,1,65572!syExceptionType,java.lang.Throwable
    java.lang.Throwable
         at intradoc.common.IdcLogWriter.doMessageAppend(IdcLogWriter.java:80)
         at intradoc.common.Log.addMessage(Log.java:270)
         at intradoc.common.Log.errorEx2(Log.java:218)
         at intradoc.common.LoggingUtils.logMessage(LoggingUtils.java:99)
         at intradoc.common.SystemUtils.reportErrorEx(SystemUtils.java:555)
         at intradoc.common.SystemUtils.errEx(SystemUtils.java:640)
         at intradoc.common.SystemUtils.err(SystemUtils.java:631)
         at intradoc.indexer.CommonIndexerBulkLoader.handleLoadError(CommonIndexerBulkLoader.java:586)
         at intradoc.indexer.CommonIndexerBulkLoader.loadRecordWebChange(CommonIndexerBulkLoader.java:222)
         at intradoc.indexer.IndexerBulkLoader.createBulkLoad(IndexerBulkLoader.java:310)
         at intradoc.indexer.IndexerBulkLoader.doWork(IndexerBulkLoader.java:165)
         at intradoc.indexer.Indexer.doIndexing(Indexer.java:439)
         at intradoc.indexer.Indexer.buildIndex(Indexer.java:348)
         at intradoc.server.IndexerMonitor.doIndexing(IndexerMonitor.java:1012)
         at intradoc.server.IndexerMonitor$4.run(IndexerMonitor.java:832)
    *Indexing aborted. Aborting index build. Too many errors with finding and copying files to the appropriate place. [ Details ]*
    An error has occurred. The stack trace below shows more information.
    !csIndexerAbortedMsg!csIndexerRenameFailedAbort
    intradoc.common.ServiceException: !csIndexerRenameFailedAbort
         at intradoc.indexer.CommonIndexerBulkLoader.handleLoadError(CommonIndexerBulkLoader.java:608)
         at intradoc.indexer.CommonIndexerBulkLoader.loadRecordWebChange(CommonIndexerBulkLoader.java:222)
         at intradoc.indexer.IndexerBulkLoader.createBulkLoad(IndexerBulkLoader.java:310)
         at intradoc.indexer.IndexerBulkLoader.doWork(IndexerBulkLoader.java:165)
         at intradoc.indexer.Indexer.doIndexing(Indexer.java:439)
         at intradoc.indexer.Indexer.buildIndex(Indexer.java:348)
         at intradoc.server.IndexerMonitor.doIndexing(IndexerMonitor.java:1012)
         at intradoc.server.IndexerMonitor$4.run(IndexerMonitor.java:832)

    Some possible debugging ideas (in no particular order):
    1. Run IdcAnalyze (see the Troubleshooting Guide in the documentation for details on how to run it). It is used to look at the existing file system, DB, and index. It can offer options (in the form of a batch file you can choose to execute) to fix issues.
    2. Examine the files and Content Server settings to see what it is doing. Specifically, since this example is purely JPG files (non-text, not full-text indexed), does this Content Server involve an IBR (Inbound Refinery)? The Inbound Refinery settings may be off and keeping things from progressing to indexing. Conversion happens before indexing.
    While debugging, do set verbose logging on the tracing sections system,index* (at least these sections, but you can add more if you like; the * is a wildcard that enables all of the index tracing sections).

  • For and Bulk collect

    Hi
    My office DB is on Oracle version 10.2.0.3.0. Could you please advise if it is possible to use BULK COLLECT when fetching records from a cursor using a FOR loop?
    Thanks in advance,
    Tinku

    Hi Tinku,
    Several people have already suggested bulk binds and FORALL to you; that is what your requirement needs, and they are 100% correct. But since you still want to know about BULK COLLECT, just have a look at the code below. I am giving it just for your knowledge's sake.
    BULK BINDS
    There are two types of bulk binds: FORALL and BULK COLLECT.
    FORALL: sends all DML operations from the PL/SQL engine to the SQL engine at once.
    BULK COLLECT: retrieves the entire result set from the SQL engine into the PL/SQL engine at once.
    Using bulk binds increases performance because it decreases the number of context switches between the PL/SQL and SQL engines.
    create or replace procedure bulkbind is
       type vdeptno is table of number index by binary_integer;
       type vdname  is table of varchar2(40) index by binary_integer;
       type vloc    is table of varchar2(40) index by binary_integer;
       ldeptno vdeptno;
       ldname  vdname;
       lloc    vloc;
       cursor dept_cur is select deptno, dname, loc from dept;
    begin
        open dept_cur;
        loop
          fetch dept_cur bulk collect into ldeptno, ldname, lloc limit 5;
          forall i in 1..ldeptno.count
           insert into tempdept values(ldeptno(i), ldname(i), lloc(i));
          exit when dept_cur%notfound;
        end loop;
        close dept_cur;  -- close the cursor when the last batch has been processed
    end;
    KPR

  • What is the difference between Topic Keywords and Index File Keywords?

    What is the difference between Topic Keywords and Index File Keywords? Any advantages to using one over the other? Do they appear differently in the generated index?
    RH9.0.2.271
    I'm using Webhelp

    Hi there
    When you create a RoboHelp project you end up with many different ancillary files that are used to store different bits of information. Many of these files bear the name you assigned to the project at the time you created it. The index file has the project name and it ends with a .HHK file extension. (HHK meaning HTML Help Keywords)
    Generally, unless you change RoboHelp's settings, you add keywords to this file and associate topics to the keywords via the Index pod. At the time you compile a CHM or generate other types of output, the file is consulted and the index is built.
    As I said earlier, the default is to add keywords to the Index file until you configure RoboHelp to add the keywords to the topics themselves. Once you change this, any keyword added will become a META tag in the topic code. If your keyword is BOFFO, the META tag would look like this:
    <meta name="MS-HKWD" content="BOFFO" />
    When the help is compiled or generated, the Index (.HHK) file is consulted as normal, but any topics containing keywords added in this manner are also added to the Index you end up with. From the appearance perspective, the end user wouldn't know the difference or be able to tell. Heck, if all you ever did was interact with the Index pod, you, as an author, wouldn't know either. Well, other than the fact that the icons appear differently.
    Operationally, keywords added to the topics themselves may hold an advantage in that if you were to import these topics into other projects, the Index keywords would already be present.
    Hopefully this helps... Rick

  • Different b/w index rebuild and index rebuild online

    Hi guys, could you please tell me the difference between index rebuild and index rebuild online?

    There is no difference in the resulting index; both commands rebuild the index structure from scratch. But in the first case, with plain REBUILD, the index is not available for use by other users until its temporary segment has been prepared and merged. The ONLINE clause keeps the index available to others even while it is being rebuilt.
    Rebuilding an index online follows, to some extent, the same concept as creating it online:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96521/indexes.htm#3062
    HTH
    Aman....

  • DB02 view is empty on Table and Index analysis, DB2 9.7, after system copy

    Dear All,
    I did the quality refresh by the system copy export/import method. ECC6 on HP-UX, DB2 9.7.
    After the import, the Runstats status in DB02 for Table and Index analysis was empty and all values showed '-1', even though
    a) all standard background jobs are scheduled in SM36,
    b) automatic runstats is enabled in the DB2 parameters,
    c) REORGCHK is scheduled periodically for all tables from DB13 and has already run twice, and
    d) 'reorgchk update statistics on table all' was also run at the DB2 level.
    But the Runstats status in DB02 is not getting updated. It is empty.
    Please suggest.
    Regards
    Vinay

    Hi Deepak,
    Yes, that is possible (but only with an offline backup). But for the new features like reclaimable tablespaces (to lower the high watermark), it's better to export/import with a system copy.
    Also, with a system copy you can use index compression.
    After backup and restore you can also have reclaimable tablespaces, but you have to create new tablespaces and then work with db6conv and online table move to move each tablespace online to the new one.
    Best regards,
    Joachim

  • Creation of User Defined Field in EXCHANGE RATE AND INDEXES

    Hi Experts,
    I want to create a User Defined Field in EXCHANGE RATE AND INDEXES, but while creating the UDF from User Defined Fields - Management I am unable to find the table for it. Right now my client is using SAP B1 2007 Patch 08. Is there any way to create a user defined field in EXCHANGE RATE AND INDEXES?
    Please help me out on this issue.
    with regards,
    Pankaj K and Kamlesh N

    Pankaj,
    When you go to the Manage User Fields area to define a UDF, all the possible areas where UDFs can be created in B1 are listed. You are able to create UDFs only on these.
    Suda

  • Table files and Index files 2GB on Windows 2003 Server SP2 32-bit

    I'm new to Oracle and I've run into the problem where my table files and index files are > 2GB. I have an Oracle instance running version 10.2.0.3.0. I have a number of table files and index files that have a current file size of 1.99GB. My Oracle crashes about three times a week because of a write fault/failure. I've determined that the RDBMS is trying to write an index or table file > 2GB. When this occurs, it crashes.
    I've been reading the Oracle knowledge base, and it suggests that there is a fix or release of Oracle 10g to resolve this problem. However, I've been unable to locate any fix or release to address my issue. Does such a fix or release exist? How do I address this issue? I'm from the world of MS SQL and IBM DB2 and we don't have this issue there. I am running an NTFS file system. Could this issue be related to a Windows fix?
    Surely Oracle can handle databases > 2GB.
    Thanks in advance for any help.

    After reading your response it appears that my real problem has to do with checkpointing. I've included below a copy of the error message:
    Oracle process number: 8
    Windows thread id: 3768, image: ORACLE.EXE (CKPT)
    *** 2008-07-27 16:50:13.569
    *** SERVICE NAME:(SYS$BACKGROUND) 2008-07-27 16:50:13.569
    *** SESSION ID:(219.1) 2008-07-27 16:50:13.569
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    error 221 detected in background process
    ORA-00221: Message 221 not found; No message file for product=RDBMS, facility=ORA
    ORA-00206: Message 206 not found; No message file for product=RDBMS, facility=ORA; arguments: [3] [1]
    ORA-00202: Message 202 not found; No message file for product=RDBMS, facility=ORA; arguments: [D:\ELLIPSE_DATABASE\CONTROL\CTRL1_ELLPROD1.CTL]
    ORA-27072: Message 27072 not found; No message file for product=RDBMS, facility=ORA
    OSD-04008: WriteFile() failure, unable to write to file
    O/S-Error: (OS 33) The process cannot access the file because another process has locked a portion of the file.
    Can you tell me why I'm having issues with checkpointing and the control file?
    Can I rebuild the control file if it is corrupt?
    The problem has been going on since April 2008. I'm taking over the system.
    Thanks

  • 'unable to connect' and  index.php

    Hi. 
    I am developing a Web Site and index.php is my point of entry.
    Document Root     Library/WebServer/Documents
    so my path is:       Library/WebServer/Documents/dwwdSite
    httpd.conf file is modified to add index.php  and have it listed first.
    <IfModule dir_module>
    DirectoryIndex   index.php   index.html
    </IfModule>
    Troubleshooting:
    I was using Netbeans IDE and when I ran index.php it opened in the browser.
    When I launched 'any' of my index.php files from Netbeans IDE, they opened correctly in the browser.
    I am NOW using Dreamweaver CC and when I run index.php I get the error message 'Unable to Connect'.
    For the last 2 days I have been working on this and I am completely stuck.
    This morning I thought of another way to test the 'unable to connect' error.
    I decided to copy this same file into Netbeans IDE and I NOW get the same Error Message ' Unable to Connect'
    when running index.php from Netbeans.
    Somehow, my settings are not correctly configured anymore.
    Here are my screenshots from Dreamweaver > manage sites.
    I believe that this is a rather simple fix that I am somehow not seeing.
    Maybe someone can spot the mistake.
    I appreciate your help and explanation.

    Hi Sudarshan.
    You have been very kind and very clear in your explanation.
    One of the very best that I have ever communicated with on this forum !
    I have checked many, many things.
    I wanted to make certain that I killed apache and restarted it.
    I do not think it is RUNNING at all.
    1.
    myNameMacBookPro:~ myName$ pwd
    /Users/myName
    2.
    myNameMacBookPro:~ myName$ ps -ax | grep http
    1892 ttys000    0:00.00 grep http
    3.
    myNameMacBookPro:  myName$ hostname
    local
    4.
    myNameMacBookPro:etc myName$ cd apache2
    reginaMacBookPro:apache2 myName$ ls
    extra            httpd.conf.pre-update    mime.types        other
    httpd.conf        magic            original        users
    5.
    myNameMacBookPro:apache2 myName$ sudo nano httpd.conf
    myNameMacBookPro:apache2 myName$ sudo apachectl -k restart
    Syntax error on line 1 of /private/etc/apache2/users/myNameBU.conf:
    Invalid command '{\\rtf1\\ansi\\ansicpg1252\\cocoartf1187\\cocoasubrtf370', perhaps misspelled or defined by a module not included in the server configuration
    httpd not running, trying to start
    myNameMacBookPro:apache2 myName$
    6.
    The above code may be a hint at part of the problem.
    I created myNameBU.conf   as a backup when I was editing the file.
    QUESTION.  Why is /private/etc/apache2/users/myNameBU.conf:
    being referenced above ?
    7.
    I scanned my ports from preferences and found these two which I believe are what you stated they should be.
    port 80 is html
    port 8080 is html-alt
    8.
    I have modified this line and tried it both ways restarting apachectl each time.
    AllowOverride None
    AllowOverride All
    9.
    And here is a part of my httpd.conf.
    excerpts.
    httpd.conf
    # User/Group: The name (or #number) of the user/group to run httpd as.
    # It is usually good practice to create a dedicated user and group for
    # running httpd, as with most system services.
    User _www
    Group _www
    # DocumentRoot: The directory out of which you will serve your
    # documents. By default, all requests are taken from this directory, but
    # symbolic links and aliases may be used to point to other locations.
    DocumentRoot "/Library/WebServer/Documents"
    # Each directory to which Apache has access can be configured with respect
    # to which services and features are allowed and/or disabled in that
    # directory (and its subdirectories).
    # First, we configure the "default" to be a very restrictive set of
    # features.
    <Directory />
        Options FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
    </Directory>
    10.
    I took a look at this too.
    usr/bin
    #path to httpd binary including options if necessary
    HTTPD = "usr/sgin/httpd"
    # pick up any environmental variables if test - f /usr/sbin/envvars; then ./usr/sbin/envvars
    fi
    STATUSURL = "http://localhost:80/server-status
    11.
    Just a reminder.
    I started writing .php scripts with Netbeans IDE.
    The programs ran: I got output in the browser. Things worked just fine!
    I started writing .php scripts with DreamweaverCC.
    The programs NEVER ran.
    I have always gotten 'Unable to Connect:
    Firefox/Safari can't establish a connection to the server at localhost.'
    12.
    QUESTION.
    At one point I was on the phone with a member of the Adobe Technical Support Team.
    They connect to my desktop (only) remotely.  I am very cautious about this.
    Could something have been inadvertently changed when they did this ?
    They do need to connect through a PORT - yes ?
    This is very frustrating and I am losing days of work.
    I want to get back to Web Development.
    I love Adobe products, but this should not be such a huge obstacle.
    When I called Technical Support (after much troubleshooting individually and on this forum),
    the fellow told me that they ONLY support FTP - not LOCALHOST.
    This makes no sense. People develop 'locally' and then FTP it to the server in the production stage.
    Again, I appreciate your assistance.

  • 'unable to connect' and 'localhost' and index.php and dreamweaverCC

    Hi. 
    I am developing a Web Site and index.php is my point of entry.
    Document Root Library/WebServer/Documents
    so my path is: Library/WebServer/Documents/dwwdSite
    httpd.conf file is modified to add index.php  and have it listed first.
    <IfModule dir_module>
    DirectoryIndex   index.php   index.html
    </IfModule>
    Troubleshooting:
    I was using Netbeans IDE and when I ran index.php it opened in the browser.
    When I launched 'any' of my index.php files from Netbeans IDE, they opened correctly in the browser.
    I am now using Dreamweaver CC and when I run index.php I get the error message 'Unable to Connect'.
    For the last 2 days I have been working on this and I am completely stuck.
    This morning I thought of another way to test the 'unable to connect' error.
    I decided to copy this same file into Netbeans IDE and I NOW get the same Error Message ' Unable to Connect'
    when running index.php from Netbeans.
    Somehow, my settings are not correctly configured anymore.
    Here are my screenshots from Dreamweaver > manage sites.
    I believe that this is a rather simple fix that I am somehow not seeing.
    Maybe someone can spot the mistake.
    I appreciate your help and explanation.

    Site window settings.
    Site Name: dwwdSite
    Local site folder: /Library/WebServer/Documents/dwwdSite
    Server window settings.
    Server Name: testing Server
    Address: Macintosh HD/Library/WebServer/Documents/dwwdSite
    Connect using: Local/network
    Testing: yes (checked)
    Server folder: /Library/WebServer/Documents/dwwdSite
    (I also tried this: Server folder: /Library/WebServer/Documents)
    Web URL: http://www.localhost/dwwdSite
    Server Advanced tab: (within server window settings)
    Testing server: PHP MySQL
    Advanced Settings window.
    Local info: Web URL: http://www.localhost/dwwdSite
    Enable cache: yes (checked)

  • Conforming and Indexing Errors, Media Pending, Audio won't play in timeline

    I'm working on a desktop PC which is running Windows 7 Professional 64-bit and Adobe Premiere Pro (version CS5.5). It's currently utilizing a second gen. 3.4Ghz i7 2600 processor, 16GB of 1600Mhz RAM, 64GB solid-state drive and a ASUS P8Z68-V Intel Z68 Motherboard with onboard audio (Realtek ALC892 chipset) and onboard video. My problem is this:
    The conforming and indexing of all of my imported media never seems to finish regardless of how many times I reopen the project file and wait for it. On the lower right-hand portion of the screen, next to the conforming/indexing progress bar, is a little red "X". When clicked, it pops up with a list of errors that read: "An unexpected error occurred while performing a conform action on the following file...". As a result, my audio channels have no waveform and during playback there are no audible tones or levels. On some video clips there's just text that reads "Media Pending". This only appears to happen with project files that I saved on external hard drives, and I suspect it has something to do with the Media Cache Files folder and how Premiere Pro locates these conform/index files. I've also encountered this problem in CS3 and CS4.
    I have a few questions:
    1) How do I avoid error messages in regards to indexing and conforming
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    I'm fine with having to wait for a project to conform and index, but it never seems to complete itself! Any help regarding this matter would be greatly appreciated.

    Harm filled in pretty much all the salient details, but I'll do another pass here.
    1) How do I avoid error messages in regards to indexing and conforming
    Two parts here.  One, conforming only happens for certain media files, ie the ones where performance is critical and we can't depend on extracting the audio fast enough for realtime playback.  That's basically anything in an .mpeg wrapper, or AVCHD material.  So if you edit XDCAM HD/EX or P2, or RED, or even AVIs or QT, those formats don't require audio conforming.
    If you're stuck editing AVCHD or MPEG2, then it needs to conform.  But, that being said, you shouldn't be getting errors in the first place. I think it's related to your external drives.  More below...
    2) How do you know when indexing/conforming has completed itself? (there doesn't seem to be a progress log or a list of commands/executions)
    Nope, you have a progress status bar indicating which file it's working on.  If there's an error, it shows up in the events panel.
    3) Indexing and conforming appears to be an automatic process, but is there a way to do it manually?
    No.
    4) What's the best way to setup your media cache files when you click EDIT > PREFERENCES > MEDIA?
    While some people like having the check box for having the conform files beside the media, I hate it.  Yes, it means that if you move the project to a different system & reopen, you potentially can avoid recreating CFA files, but I find the drive littering not worth it.  I much prefer setting the Media prefs to point to a specific media drive.  Usually a raid, if available.  Definitely not an external drive that you disconnect & walk away with.  If you don't have a permanent raid on your system, then preferably a dedicated internal drive for media (think along the lines of your Photoshop 'scratch disk').  Failing that, leave it on your C: drive, although with a 64 Gig SSD, you probably don't have much room for transient temporaries.
    5) If I have approximately 1 hour of footage, what's an average wait time for conforming/indexing? What about 5 hours of footage? 10?
    Like Harm said.  Totally dependent on the media container & the speed of your drive i/o.  The conforming is iterating through the entire file & pulling audio data, so it's not CPU intensive, it's all i/o.
    6) Adobe recommends not editing until the conforming and indexing has completed itself-- how important is this?
    If you're trying to play/scrub while conforming, it's going to be pokey.  Esp. if you're trying to access the file that's actively being conformed.  As I just said, we're hitting the files for all the audio.  The i/o is being saturated already, so unless you have a stellar raid, you don't have much headroom.
    7) Sometimes it appears as though the conforming and indexing has finished, but then I still have problems with playback. Do I have to reopen the project for it to continue with the conforming/indexing progress? I've already determined that the video file I'm working with is intact and free of any corruption.
    You should be good to go.  Sounds like there's something else at play here.
    Okay, back to what I think is wrong:  you don't mention what kind of external drives you're using.  You're making a bad assumption that blowing away conformed files & doing a reconform is buggy - I doubt it, as that's the same process that happened when you initially brought in the files.  I've blown away my media cache folder multiple times and have never seen failures on reconform.  So it's got to be one of two things:  either a read error from the source when attempting to pull the audio, or a write error to the destination.  Now I don't know where you currently are pointing the media cache directory, or what your source drive is, so I can only speculate.
    My suggestion is to do some elimination.   Copy one of the files that failed on you to your C drive, & target your media cache directory also to C:.  Pick a new project, import your copied file, confirm that it conforms correctly & behaves.   Then, try to use the same clip from your external drive, keeping the media cache to C:.  If that's still good, then try targeting another (local/internal) drive as your media cache target; close/restart, then import the clip from C:, and then import the clip from your external drive.  This troubleshooting should give us something.
    PS, if you're trying to edit from external USB drives, good luck.  I find it a major PITA that I avoid as much as possible.  Firewire isn't much better.  I know some people do it successfully, but I think it's a road fraught with peril.  These devices are generally not designed for heavy duty I/O and a flaky connection or drive is nothing but pain.
    Cheers

  • Report to find all table and index sizes

    Hi all,
    Good day..
    Is there any report.sql or similar to find out the sizes of all the tables and indexes in a database?
    thanks,
    baskar.l

    1. To get table size
    See the thread "What will be the table size if?", or use:
    break on report
    set line 200
    COMPUTE SUM LABEL "Total Reclaimable Space" OF "KB Free Space" ON REPORT
    column "Table Size" Format a20
    column "Actual Data Size" Format a20
    column "KB Free Space" Format "9,99,999.99"
    select table_name,
    round((blocks*8),2)||'kb' "Table Size",
    round((num_rows*avg_row_len/1024),2)||'kb' "Actual Data Size",
    pct_free,
    round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) "KB Free Space"
    from user_tables
    where round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) > 0
    order by round((blocks*8),2) - (round((blocks*8),2)*pct_free/100) - (round((num_rows*avg_row_len/1024),2)) desc;
    2. To get index size
    See the thread "How to size the Index".
    Hth
    Girish Sharma
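    As a plain alternative to the script above, segment sizes for both tables and indexes can also be read directly from USER_SEGMENTS (or DBA_SEGMENTS for the whole database). A small JDBC sketch along those lines (connection details are placeholders; the same query works as-is in SQL*Plus):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SegmentSizes {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");
            Statement st = con.createStatement();
            // Size of every table and index segment in the current schema, in MB.
            ResultSet rs = st.executeQuery(
                    "SELECT segment_name, segment_type, ROUND(bytes/1024/1024, 2) mb " +
                    "FROM user_segments WHERE segment_type IN ('TABLE', 'INDEX') " +
                    "ORDER BY bytes DESC");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " (" + rs.getString(2) + "): "
                        + rs.getDouble(3) + " MB");
            }
            con.close();
        }
    }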
