Database file: forms disappeared but size of file remained

After saving a StarOffice database file, the computer shut down due to a power management setting. On opening the file again, the forms it contains are invisible. How can I make them visible again?

Yes, psadmin worked. My command was:
psadmin register-portlet -u amadmin -f password.txt -p portal1 -g myapp.ear
I think that bypasses the file size issue, though I'm not completely sure. I found several files called web.xml and changed them all, but it made no difference to deploying the portlet on the console.
So I think this is the right answer.
Thanks

Similar Messages

  • Database File Size Management

    Hello,
    I have an application that stores large records (capped at 1 MB each in size, but on average around 0.5 MB), and can at any given time have hundreds of thousands of such records. We're using a Btree structure, and BDB has thus far acquitted itself rather well.
    One serious issue we are facing, however, is that the size of the database keeps growing. My expectation was that the database file would grow only on an as-needed basis, and would not keep growing when records have been deleted. Our application is transactional and replicated. We have the txnNoSync flag set to true, and checkpoint every 60 seconds (this number is tunable). We have setReverseSplitOff set to true; could this be the problem? Our page size is set to the maximum possible size, i.e., 65536 bytes.
    Thanks in advance,
    Prashanth

    Hi Prashanth,
    No, it has nothing to do with turning reverse splits off, the page size or anything else.
    It's just that Btree (and Hash) databases are grow-only. Although you free up space by deleting records within a Btree database, that space is not returned to the filesystem; it is reused where possible. Here is more information on disk space considerations:
    http://www.oracle.com/technology/documentation/berkeley-db/db/ref/am_misc/diskspace.html
    Also, to add to the information there, you could call DB->compact():
    http://www.oracle.com/technology/documentation/berkeley-db/db/api_c/db_compact.html
    or use the db_dump and db_load utilities to do the compaction offline (that is, with all writes to the database stopped). Note that if you use your own Btree comparison function, you must modify the source code for the utilities so that they are aware of the order it imposes.
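    As a minimal illustration in C (a hypothetical helper, not from the original thread; assumes an already-open Btree handle and the Berkeley DB library), compacting the whole database with DB_FREE_SPACE asks BDB to return emptied pages to the filesystem:
       #include <string.h>
       #include <db.h>

       /* Compact the entire database (NULL start/stop keys) and return
        * freed pages to the filesystem via the DB_FREE_SPACE flag. */
       int compact_whole_db(DB *dbp, DB_TXN *txn)
       {
           DB_COMPACT c_data;
           memset(&c_data, 0, sizeof(c_data));  /* zero the stats/options struct */
           return dbp->compact(dbp, txn, NULL, NULL, &c_data,
                               DB_FREE_SPACE, NULL);
       }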
    Let me know if you need more information on this.
    Regards,
    Andrei

  • Syslog database file size is growing

    Hi,
    I have a CiscoWorks server (LMS 2.6) that had an issue with the Syslog Severity Level Summary report: it would hang whenever we ran a job, and the report job always failed. I also observed that the SyslogFirst.db, SyslogSecond.db, and SyslogThird.db database files had grown to 90 GB each, because of which RME was very slow.
    I did an RME database reinitialization, and after that the Syslog Severity Level Summary report started working properly. The file sizes of SyslogFirst.db, SyslogSecond.db, and SyslogThird.db also dropped to almost 10 MB. But today I see the SyslogThird.db file has grown to 4 GB again.
    I need help finding what is causing these files (SyslogThird.db) to grow so fast. Is there any option in CiscoWorks I need to look at to stop these files from growing so fast? Please help me with this issue.
    Thanks & Regds,
    Lalit

    Hi Joseph,
    Thanks for your reply. SyslogThird.db is not growing now, but my Severity-Wise Summary report has stopped again. When I check the status in the RME jobs, it says the Severity-Wise Summary report failed. I checked the SyslogThird.db file size and found it was 20 GB. Is it failing because of the 20 GB file size?
    After the RME reinitialization it was only 1 GB and the report was being generated. Please share your valuable inputs. Thanks once again.
    Thanks & Regds,
    Lalit

  • I have an iPod Video. My computer crashed; I have reinstalled iTunes, and luckily I had a backup of my music, so that's OK. I have photos on my iPod that I would like to transfer back to the computer, as I lost them! But it is a database file and I cannot open it.

    My computer crashed and I lost my photos, but I had a backup of my music. How can I open the database file to transfer the pics back to my computer? Help please.

    You won't be able to, just by navigating to the iPod's photos folder. You'll likely need the help of some sort of third-party software to get the job done for you. One thing to keep in mind about these photos is that they are no longer in their full resolution; they are scaled-down thumbnail versions that were optimized for viewing on your iPod Video's smaller 2.5" LCD display.
    Here is one such product.
    http://www.macroplant.com/phonetopc/
    Google for more.
    B-rock

  • Store and Display doc/pdf files in the database using Forms

    Hi all,
    How can I store and display doc/PDF files in the database using Forms 10g?
    Arif

    How to get up and running with WebUtil 1.06 included with Oracle Developer Suite 10.1.2.0.2 on a win32 platform
    Solution
    Assuming a fresh "Complete" install of Oracle Developer Suite 10.1.2.0.2,
    here are steps to get a small test form running, using WebUtil 1.06.
    Note: [OraHome] is used as an alias for your real oDS ORACLE_HOME.
    Feel free to copy this note to a text editor, and do a global find/replace on
    [OraHome] with your actual value (no trailing slash). Then it is easy to
    copy/paste actual commands to be executed from the note copy.
    1) Download http://prdownloads.sourceforge.net/jacob-project/jacob_18.zip
      and extract to a temporary staging area. Do not attempt to use 1.7 or 1.9.
    2) Copy or move jacob.jar and jacob.dll
      [JacobStage] is the folder where you extracted Jacob, and will end in ...\jacob_18
         cd [JacobStage]
         copy jacob.jar [OraHome]\forms\java\.
         copy jacob.dll [OraHome]\forms\webutil\.
      The Jacob staging area is no longer needed, and may be deleted.
    3) Sign frmwebutil.jar and jacob.jar
      Open a DOS command prompt.
      Add [OraHome]\jdk\bin to the PATH:
         set PATH=[OraHome]\jdk\bin;%PATH%
      Sign the files, and check the output for success:
         [OraHome]\forms\webutil\sign_webutil [OraHome]\forms\java\frmwebutil.jar
         [OraHome]\forms\webutil\sign_webutil [OraHome]\forms\java\jacob.jar
    4) If you already have a schema in your RDBMS which contains the WebUtil stored code,
      you may skip this step. Otherwise,
      Create a schema to hold the WebUtil stored code, and privileges needed to
      connect and create a stored package. Schema name "WEBUTIL" is recommended
      for no reason other than consistency over the user base.
      Open [OraHome]\forms\create_webutil_db.sql in a text editor, and delete or comment
      out the EXIT statement, to be able to see whether the objects were created without
      errors.
      Start SQL*Plus as SYSTEM, and issue:
         CREATE USER webutil IDENTIFIED BY [password]
         DEFAULT TABLESPACE users
         TEMPORARY TABLESPACE temp;
         GRANT CONNECT, CREATE PROCEDURE, CREATE PUBLIC SYNONYM TO webutil;
         CONNECT webutil/[password]@[connectstring]
         @[OraHome]\forms\create_webutil_db.sql
         -- Inspect SQL*Plus output for errors, and then
         CREATE PUBLIC SYNONYM webutil_db FOR webutil.webutil_db;
      Reconnect as SYSTEM, and issue:
         grant execute on webutil_db to public;
    5) Modify [OraHome]\forms\server\default.env, and append [OraHome]\jdk\jre\lib\rt.jar
      to the CLASSPATH entry.
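      For example (illustrative only; the jars already listed vary by install, so keep whatever is there), the entry would end up looking like:
         CLASSPATH=...existing entries...;[OraHome]\jdk\jre\lib\rt.jar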
    6) Start the OC4J instance
    7) Start Forms Builder and connect to a schema in the RDBMS used in step (4).
      Open webutil.pll, do a "Compile All" (Shift-Control-K), and generate to PLX (Control-T).
      It is important to generate the PLX, to avoid the FRM-40039 discussed in
      Note 303682.1
      If the PLX is not generated, the Webutil.pll library would have to be attached with
      full path information to all forms wishing to use WebUtil. This is NOT recommended.
    8) Create a new FMB.
      Open webutil.olb, and Subclass (not Copy) the Webutil object to the form.
      There is no need to Subclass the WebutilConfig object.
      Attach the Webutil.pll Library, and remove the path.
      Add an ON-LOGON trigger with the code
             NULL;
      to avoid having to connect to an RDBMS (optional).
      Create a new button on a new canvas, with the code
             show_webutil_information (TRUE);
      in a WHEN-BUTTON-PRESSED trigger.
      Compile the FMB to FMX, after doing a Compile-All (Shift-Control-K).
    9) Under Edit->Preferences->Runtime in Forms Builder, click on "Reset to Default" if
      the "Application Server URL" is empty.
      Then append "?config=webutil" at the end, so you end up with a URL of the form
          http://server:port/forms/frmservlet?config=webutil
    10) Run your form.

  • Is there a size limit on the iPod for the song database file?

    I have been running into the same issue for the last 2 weeks: once I exceed 110 GB on my iPod Classic 160 GB, iTunes is no longer able to update the database file on the iPod.
    When clicking (on the iPod) on Settings/About, the iPod displays the wrong number of songs. Also, the iPod is no longer able to play any songs.
    Is there a size limit for the database file on the iPod?
    I am making excessive use of the 'comments' field in every song that I load onto the iPod. This increases the size of the database file.
    Is there a way that I can manually update the database file on the iPod?
    Thanks for your help!

    Did you experience some crashing of the iPod as well? Do you know how many separate items you had?

  • I inherited an iPod Classic from the passing of a family member. I don't have any passwords to access it. I have a large iTunes database on my PC, but I want to add the files on the inherited iPod to my iTunes. I do not want to lose files in iTunes or on the iPod.

    but I want to add the files on the inherited ipod to my itunes
    If the media was purchased from the iTunes store, it cannot be transferred to another iTunes account.

  • AIR - SQLite with Adobe AIR inserts data in memory but does not record it in the database file

    I have the code:
    var statement:SQLStatement = new SQLStatement();
    statement.addEventListener(SQLEvent.RESULT, insertResult);
    statement.addEventListener(SQLErrorEvent.ERROR, insertError);
    statement.sqlConnection = sqlConnection;
    statement.text = "insert into table values('"+TINome.text+"','"+TISerial.text+"') ";
    statement.execute();
    This runs without error and the data is inserted (I don't know where), but when I look into the database with the Firefox SQLite Manager, the database is empty! Yet AIR continues to run for a while as if the data had been recorded in the database file. I don't know what is happening.
    Help please!

    TopLink In Memory was developed by our project to solve this problem. It allows us to run our tests either in memory or against the database.
    In memory, we basically stub out the database. This speeds up our tests about 75x (in memory we run 7,600 tests in 200 seconds; it takes about 5 hours against the database). However, it throws away things like transactions, so you can't test things like rollback.
    In database mode, it just uses TopLink. Another benefit is that it watches all the objects created, allowing automatic cleanup of created test objects, keeping your database clean and preventing test interactions.
    We used HSQL running in memory previously; it worked fine. However, we needed to write scripts to translate our Oracle SQL into HSQL. Also, we had to override things like the date function in HSQL, and a few of our queries behaved unexpectedly in HSQL. We later abandoned it, as it became a lot of maintenance. It was about 10x faster than running against Oracle.
    Interestingly, we install Oracle on all our developers' machines too; tests run way faster locally, and developers feel much more comfortable experimenting with a local instance than with a shared one.
    You can find the toplink in memory stuff at:
    http://toplink-in-mem.sourceforge.net/
    I provide all support for it. Doc is sketchy but I'm happy to guide you through stuff and help out where I can with it.
    - ted

  • Database file size

    I am using Berkeley DB 5.1.19 with replication manager.
    I am seeing big differences in the sizes of the db files between the master and the client. Is that expected, and if so, what is the reason? This has an impact on the size of the backup too.
    On the master:
    [root@sde1 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    29M replica_data/__db.002
    11M replica_data/__db.003
    2.9M replica_data/__db.004
    25M replica_data/__db.005
    12K replica_data/__db.006
    2.3M replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    11M replica_data/log.0000000158
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    On the client:
    [root@sde2 sandvine]# du -sh replica_data/*
    16K replica_data/__db.001
    146M replica_data/__db.002
    133M replica_data/__db.003
    3.3M replica_data/__db.004
    33M replica_data/__db.005
    12K replica_data/__db.006
    8.0K replica_data/__db.rep.db
    1.1M replica_data/__db.rep.diag00
    1.1M replica_data/__db.rep.diag01
    4.0K replica_data/__db.rep.egen
    4.0K replica_data/__db.rep.gen
    8.0K replica_data/__db.reppg.db
    8.0K replica_data/__db.rep.system
    7.2M replica_data/log.0000000159
    8.0K replica_data/persistency_name_mapping.tbl
    8.0K replica_data/QM_KPI_NumManagedTable1_20111117T015214.012632_backup.db
    8.0K replica_data/QM_KPI_NumOverQuotaTable2_20111117T015214.074648_backup.db
    8.0K replica_data/QM_KPI_NumUnderQuotaTable3_20111117T015214.138377_backup.db
    8.0K replica_data/QM_KPI_NumUnmanagedTable4_20111117T015214.200234_backup.db
    8.0K replica_data/QmLastIpAddressTable5_20111117T015214.258221_backup.db
    12K replica_data/QmPolicyConfiguration6_20111117T015214.316379_backup.db
    13M replica_data/QmSubIdNameTable7_20111117T015214.375543_backup.db
    41M replica_data/QmSubscriberQuota_Daily8_20111117T015214.432662_backup.db
    41M replica_data/QmSubscriberQuota_PC_or_Monthly9_20111117T015214.506866_backup.db
    41M replica_data/QmSubscriberQuota_Roaming10_20111117T015214.570525_backup.db
    15M replica_data/QmSubscriberQuotaState12_20111117T015214.717594_backup.db
    41M replica_data/QmSubscriberQuota_Weekly11_20111117T015214.634982_backup.db
    For example:
    The following two files are small on the master:
    29M replica_data/__db.002
    11M replica_data/__db.003
    whereas on the client, the same files are:
    146M replica_data/__db.002
    133M replica_data/__db.003
    Thx in advance.

    The __db.00* files are not replicated database files. They are internal Berkeley DB files that back our shared memory regions and they are specific to each separate site's database. It is expected that they can be different sizes reflecting the different usage patterns and potentially different configuration options on the master and the client database. To read more about these files, please refer to the Programmer's Reference section titled "Shared memory regions".
    I am assuming that your replicated databases are the QM* and Qm* files. These look like they are the same size on the master and client, as we would expect.
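    (As an aside, region sizes mostly track per-site configuration; the cache region, for instance, is sized independently at each site. A minimal, illustrative C sketch, not from the original thread; the 512 MB figure is made up:)
       #include <db.h>

       /* Size this site's environment cache: 0 GB + 512 MB, in a single
        * cache region. Must be called before DB_ENV->open(). */
       int set_cache(DB_ENV *dbenv)
       {
           return dbenv->set_cachesize(dbenv, 0, 512 * 1024 * 1024, 1);
       }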
    Paula Bingham
    Oracle

  • Database file sometimes gets to zero size and zero file flags

    I have a problem where, on my system, a single database file sometimes gets into a state where it has zero size and zero file flags (as if chmod 0 had been run on the file).
    My database runs 24/7 and there are multiple agents running at the same time. My database files are backed up and removed from time to time to protect the stored data. So I guess this error could come up when two agents back up or recover the database file at the same time. Still, it is hard to confirm that this is the cause, so I'd like to ask whether anyone has stumbled on the same problem, where a database ends up in a state of zero flags and zero size.

    Hello, does anyone have any tips about this issue?

  • I thought I had Adobe stuff. I am not computer savvy at all, so I need to speak to someone who can guide me through this situation. I thought I had an Adobe PDF file, but I guess not. When I went to print taxes, the blank forms printed, but not the tax

    I thought I had Adobe stuff. I am not computer savvy at all, so I need to speak to someone who can guide me through this situation. I thought I had an Adobe PDF file, but I guess not. When I went to print taxes, the blank forms printed, but not the tax information I typed in. Please have someone call me. My number is (909) 864-1150. Thank you.

    Hello Bill,
    You can seek help from Adobe customer care for the same.
    Contact Customer Care
    Regards,
    Anubha

  • Misc questions - Old Toad - backing up to DVDs and iPhoto database files

    Old Toad
    Burning DVDs for backup:
    I see that if I Command-click Event folders, the size shown in the Information area in the bottom-left corner increases. I keep it to 4 GB or less for each DVD; that works for me. When doing this, does it also save any keywords, etc.?
    I would happily go through my library this way to get a secure iPhoto archive. Safest way... no?
    If I did have to restore from these, what happens? Is it like re-importing them, like Import to Library under the File menu?
    (I haven't decided on an external backup HD system yet. Plus they can fail too. Right now I have 3 computers that can talk via Ethernet. One has an additional internal drive which I've been using to back up Mail, iCal, Address Book, and other info like plists. Including my wife and 2 kids, I have 4 users to manage.)
    Thanks for any advice you have.
    PS: I went to your site. What's up with this font, HelveticaNeue.dfont? Do I need it?
    I downloaded the iPhoto db backup. I'll read the readme to figure it out. (Where exactly does the iPhoto database file exist?)
    Thanks again.
    Dave Stamm

    David:
    If you use the Share->Burn menu option you will preserve the keywords, comments and titles.
    Burning to DVD is a good way to archive the photos for later use in iPhoto if necessary. To get those photos back into an iPhoto library, just mount the disk with iPhoto open. It will show up in the left-hand pane under the Events and Photos icons. Drag the entire disk icon, or individual albums or events, onto the Events icon.
    However, for a way to quickly recover from a damaged library, I suggest a backup copy of the library on a second hard drive. You can perform incremental backups of the library, copying only new or edited files, with a backup application like Synk; an incremental backup only takes about a minute or less. There are several other backup applications that can do incremental backups of iPhoto Library packages. Yes, external HDs can fail, but having both fail at the same time would be unusual unless you were to experience a power surge that blew everything powered up. I have an external HD that I only turn on when I back up specific files and folders (in addition to Time Machine), daily or every other day.
    The Helvetica Neue font is what iPhoto 8 uses for Event titles, the number of photos selected and being dragged, etc. If it is missing or deactivated you wouldn't be able to change Event names or click and drag photos. If you're not having that problem there's nothing to worry about.
    The database file resides inside the iPhoto Library package which is in the Pictures folder (default location).
    OT

  • How to stop BDB from Mapping Database Files?

    We have a problem where the physical memory on Windows (NT kernel 6 and up, i.e., Windows 7, 2008 R2, etc.) gets maxed out after some time when running our application. On an 8 GB machine, if you look at our process loading BDB, it is only around 1 GB. But when looking at the memory using RAMMap, you can see that the BDB database files (not the shared region files) are being mapped into memory, and that is where most of the memory consumption is taking place. I wouldn't care normally, as memory mapping can have performance and usability benefits, but the result is that the system comes to a screeching halt. This happens when we are inserting records at a high rate, e.g., tens of millions of records in a short time frame.
    I would attach a picture to this post, but for some reason the insert image is greyed out.
    Environment open flags: DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN | DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER
    Database open flags: DB_CREATE | DB_AUTO_COMMIT
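    For context, here is a minimal C sketch of opening an environment with exactly these flags (names are illustrative, not the poster's production code):
       #include <db.h>

       /* Create and open a transactional environment with the open
        * flags quoted above; returns 0 on success. */
       int open_env(DB_ENV **out, const char *home)
       {
           DB_ENV *dbenv;
           int ret = db_env_create(&dbenv, 0);
           if (ret != 0)
               return ret;
           ret = dbenv->open(dbenv, home,
               DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN |
               DB_INIT_MPOOL | DB_THREAD | DB_LOCKDOWN | DB_RECOVER, 0);
           if (ret != 0)
               dbenv->close(dbenv, 0);
           else
               *out = dbenv;
           return ret;
       }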

    An update for the community
    Cause
    We opened a support request (SR) to work with Oracle on the matter. The conclusion we came to was that the main reason for the memory consumption was the Windows system cache. (For reference, see http://support.microsoft.com/kb/976618.) When opening files in buffered mode, the equivalent of calling CreateFile without specifying FILE_FLAG_NO_BUFFERING, all I/O to a file goes through the Windows system cache. The larger the database file, the more memory is used to back it. This is not the same as memory-mapped files, which Berkeley DB uses for the region files (i.e., the environment). Those also use memory, but because they are bounded in size they will not cause an issue (e.g., need a bigger environment? just add more memory). The obvious reason to use the cache is performance optimization, particularly in read-heavy workloads.
    The drawback, however, is that when there is a significant amount of I/O in a short amount of time, that cache can get really full, and the result can be physical memory close to 100% used. This has negative effects on the entire system.
    Time is important, because Windows needs time to transition active pages to standby pages, which decreases the amount of used physical memory. What we found is that when our DB was installed on flash disk, we could generate a lot more I/O and our tests could run in a fraction of the time, but memory would get close to 100%. If we ran those same tests on slower disk, while the result was the same, i.e., 10 million records inserted into the database, it takes a lot longer and memory utilization does not come even close to 100%. Note that we also see the memory consumption happen when we use the hot backup in the BDB library. The reason is obvious: in a short amount of time we read the entire BDB database file, which makes Windows use the system cache for it. The total amount of memory might be a factor as well: on a system with 16 GB of memory, even with flash disk, we had a hard time reproducing the issue where the memory climbs.
    There is no Windows API that allows an application to control how much system cache is reserved, usable, or maximum for an individual file. Therefore, BDB does not have fine-grained control over this behavior on an individual file basis; BDB can only turn buffering on or off in total for a given file.
    Workaround
    In Berkeley DB, you can turn off buffered I/O on Windows by specifying the DB_DIRECT_DB flag on the environment. This is the equivalent of calling CreateFile with FILE_FLAG_NO_BUFFERING: all I/O goes straight to disk instead of memory, and all I/O must be aligned to a multiple of the underlying disk sector size. (The NTFS sector size is generally 512 or 4096 bytes, and normal BDB page sizes are generally multiples of that, so for most this shouldn't be a concern; but know that Berkeley DB will test the page size to ensure it is compatible, and if it is not, it will silently disable DB_DIRECT_DB.) What we found in our testing is that using the DB_DIRECT_DB flag had too much of a negative effect on performance with anything but flash disk, and therefore we cannot use it. We may consider it acceptable for flash environments where we generate significant I/O in short time periods. We could not reproduce the memory effect when the database was hosted on a SAN disk running 15K SAS, which is more typical, and therefore we are closing the SR.
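    A minimal sketch of setting the flag in C (an illustrative helper, assuming an environment handle created as usual; it only affects database files opened after it is set):
       #include <db.h>

       /* Bypass the Windows system cache for database file I/O by
        * turning on DB_DIRECT_DB for the environment. */
       int disable_buffered_io(DB_ENV *dbenv)
       {
           return dbenv->set_flags(dbenv, DB_DIRECT_DB, 1);
       }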
    However, Windows does have an API that controls the total system-wide amount of system cache to use, and we may experiment with this setting; see http://support.microsoft.com/kb/976618. We are also going to experiment with using multiple database partitions, so that Berkeley DB spreads the load across those other files, possibly giving the system cache time to move active pages to standby.

  • ORA-01119: error in creating database file

    Hi,
    I am trying to run the script adcrtbsp.sql, but it gives the following error.
    adcrtbsp.sql script contains:
    CREATE TABLESPACE APPS_TS_TX_DATA
      DATAFILE '?/dbf/transaction_table.dbf' SIZE 1000M REUSE
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
      SEGMENT SPACE MANAGEMENT AUTO;
    ALTER TABLESPACE APPS_TS_TX_DATA
      ADD DATAFILE '?/dbf/transaction_table_2.dbf' SIZE 1000M;
    ALTER TABLESPACE APPS_TS_TX_DATA
      ADD DATAFILE '?/dbf/transaction_table_3.dbf' SIZE 1000M;
    CREATE TABLESPACE APPS_TS_TX_IDX
      DATAFILE '?/dbf/transaction_index.dbf' SIZE 1000M REUSE
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 128K
      SEGMENT SPACE MANAGEMENT AUTO;
    ALTER TABLESPACE APPS_TS_TX_IDX
      ADD DATAFILE '?/dbf/transaction_index_2.dbf' SIZE 1000M;
    sqlplus system/pwd @adcrtbsp.sql
    CREATE TABLESPACE
    ERROR at line 1:
    ORA-01119: error in creating database file 'f:/oracle/dbf/transaction_table.dbf'
    ORA-27040: skgfrcre: create error, unable to create file
    OSD-04002: unable to open file
    O/S-Error: (OS 3) The system cannot find the path specified.
    Can anybody guide me?

    Hi,
    Let me explain.
    You started your session with:
    sqlplus system/pwd @adcrtbsp.sql
    It is trying to create 'f:/oracle/dbf/transaction_table.dbf' here.
    Check whether the '/dbf' folder in that path actually exists: the OS error says the system cannot find the path specified, so create the missing directory (or correct the path) and rerun the script.
    Aman.. nice reply... !! I am late :-(
    - Pavan Kumar N

  • Managing a single Oracle Lite database file

    Hi,
    I was wondering if there's the possibility of using a single database file with Oracle Lite, just as can be done with SQL Server CE. At the moment, I'm using the SQL CE driver for .NET to manipulate my SDF file (a SQL CE database for Pocket PCs) without using SQL Server merge replication; now I'm trying to change my SQL CE database to an Oracle Lite database, again without the whole replication thing. I've already installed the whole Oracle Lite 10g kit, but it seems it's necessary to create some DSN (which I don't fully understand :S), and that's not what I'm looking for. I hope my explanation isn't too vague and ambiguous. Thanks in advance.
    Best regards,
    César C.

    See Connection string and DSN.
    It appears that c:\windows\polite.ini and c:\windows\odbc.ini need to be installed; odbc.ini must contain the DSN entry for your DB.
    Note that you can create/modify these files when you install your application that uses Oracle Lite. If you have an application that dynamically creates the DB, you can reuse the one DSN entry for multiple DBs: just provide the DB location in the connection string along with the DSN reference.
    // DB is the database name and DSN the odbc.ini entry name (both
    // defined elsewhere in the application).
    string dbpath = Path.Combine(
        Path.GetDirectoryName(System.Windows.Forms.Application.ExecutablePath),
        "Oracle");
    string constr = string.Format(
        @"DataDirectory={0};Database={1};DSN={2};uid=system;pwd=manager",
        dbpath, DB, DSN);
    OdbcConnection cn = new OdbcConnection(constr);
