How to backdate Date Added date

I'm trying to change my Date Added dates. My old computer crashed, but I had backed up all my music on an external hard drive. When I imported my music to my new computer, all the Date Added dates showed as if I had added all those songs that day, not when I had actually added them to my iTunes library. Is there a way to backdate songs without changing my system clock, re-adding the songs, and all that?

This is the iTunes for Windows forum. You appear to be asking a question about iPhoto on a Mac. You might want to repost your query in a more relevant forum. The people checking this one may not know much about what you're asking.
Maybe this one:
http://discussions.apple.com/forum.jspa?forumID=1192
Best of luck.

Similar Messages

  • Cannot backdate data plan reduction?

    Is it not possible to backdate data plan reductions? Last month my family was traveling and using hotspots. We used a lot of data and increased our plan to 14 GB. I did so at the end of the month, via the backdating option, which allowed me to avoid overages.
    Now it's a couple of weeks into my bill cycle and I am trying to reduce the data plan to a more reasonable amount. The only thing is that I am not getting the option to backdate the reduced cost of the plan. What gives? It's nice that they allow us to backdate a data plan increase to avoid overages. Why not data plan reductions as well? Shouldn't I be able to reduce my data plan according to how much we use each month? We are not computers running the same calculations each day; the amount we use varies without any pattern. It is a nuisance to have to manage the data plan twice each month: once at the end, to backdate to more data if necessary, and once at the beginning, after the next cycle starts, to choose a lower plan, since we are unable to backdate data plan reductions.
    It just seems like this grouped data plan thing is another way for Verizon to screw over typical and even experienced users who may not necessarily have the time to spend on managing these things.
    In an ideal world Verizon would charge a scaled price per MB, where each MB becomes cheaper after a certain amount is used each month, similar to how the current plans are cheaper per MB at the top end. The only difference in this ideal world would, of course, be no overage charges and no worrying about buying more data than you need. You'd think that with all the **** money they are making (selling data is hugely profitable) they could at least make it easier and fairer for the customer. Really ticks me off.

    ihaveacellphone wrote:
    Yeah, there is a reason I didn't lower my data before the end of my last cycle: I had to raise it via backdating to avoid an overage. Did you read my post?
    The point is that when you go over at the end of the month you can backdate to more data. That is fine. But the thing is, unless you check back when the next cycle begins, you are stuck with that plan for the rest of the month, or until you remember to change it to less data...
    At which point, for whatever reason, you only have the option to prorate the cheaper plan rather than backdate for a complete reduction of cost.
    "If Verizon wanted to make money on overages they'd charge $1 per GB." Do you honestly believe that? You must have Stockholm syndrome or something. You cannot possibly think Verizon charges a $15 overage to prevent people from going over. If someone is having overages it means they are not familiar with the tiered data system, and that will not change whether the overage is $15, $5, or $0.50 per GB. Verizon wants you to overuse your allotted data because it makes them more money, period. There is simply no argument against this. You go over by 0.1 GB, you are charged $15. If they were reasonable they would charge per KB or even per MB. The average user would not have to understand the difference, because they would understand the correlation between how much data they overused and how much additional data charge was added to their bill. Your elitist disposition once again makes me consider your employment with a company so eager to **** its customers. Maybe you have just taken it from them for so long that you have started to enjoy it.
    A) Your point still doesn't make sense. You backdated so you wouldn't go over. It seems to have worked out, and it seems that is what you intended to do. The next month, simply switch back to your old data plan. You should know when your billing begins. So I'm still not seeing the issue.
    B) Enough with the insults. Simple fact: Verizon doesn't want people hogging the network, otherwise it slows down and then they'll be Sprint. $15 per GB ensures more people keep tabs on their data plans, unless they are dumb. Of course, if one insists on hogging the network, sure, they'll gladly take the $15 per GB. At $1 per GB most people wouldn't care about going over by a GB or 2 every month. Verizon would make a ton more money off those $1 or $2 charges than they will on the $15. Not that many people will go over at $15 per GB. FACT.
    C) As I stated, your per-MB or per-KB charge will just lead to more confusion on bills. People are already confused even though Verizon's plans are pretty simple. The fact is, if Verizon charged 1.5 cents per MB, which roughly equates to $15 per GB, I am 100% positive there would be plenty of people angry that they went over by 1 GB and got charged $15, because they don't understand there are 1024 MB in a GB. Here's an idea: DO NOT GO OVER. If you find you are going over, UP YOUR PLAN. Here's another idea: it's called Wi-Fi, use it. We have 5 smartphones on a 6 GB plan on which we have yet to use even 3 GB a month in the 8 months we have been on it.
    D) I do not work for Verizon, nor have I ever. I would clearly be marked as a Verizon employee in my username and avatar. Just because I choose not to believe in conspiracy theories where all the corporations are part of one big cabal run by Dr. Evil doesn't mean I work for these corporations. Also, I do not like you using the word ****. Some of us have family members that have been raped for real, and your getting billed "too much" for data is nowhere near the same thing. If Verizon is so bad, then you are free to choose another carrier that suits your needs.

  • How to access data of all users in a page?

    Hi all, I am new to APEX and have a query.
    I have installed APEX on my local machine, created a workspace, and added a few users with roles.
    I am creating an application, similar to a library app, where I have a table that stores book information (BOOK_ID, BOOK_NAME, etc). I'd like to create another table, BOOK_USER, with two columns, BOOK_ID and USER_ID. BOOK_ID would refer to the BOOKS table, and I want USER_ID to refer to the table that stores the user details of the workspace that I am using. Data entry would happen from a page that has two LOVs, one for book name (using BOOK_ID as the value) and another for user name (USER_ID, or whatever field APEX uses to store a user's ID). Can I do that in APEX? I've found ways to access and authenticate a user for a page, but how do I get the data of all users in the workspace? In other words, how do I get all usernames into an LOV?
    As I am new to APEX, I'd appreciate it if someone could guide me through the steps, or post links to tutorials, etc.

    Assuming that you are using Application Express User Accounts for authentication, you have to use the built-in APEX views:
    select * from APEX_WORKSPACE_APEX_USERS
    and use the USER_NAME column instead of a user ID.
    See http://docs.oracle.com/cd/E37097_01/doc/doc.42/e35125/priv_public001.htm#HTMDB25949
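    For the LOV itself, here is a minimal sketch of what the pieces could look like. The display/return pair is the standard APEX LOV query convention; the BOOK_USER table merely illustrates the design described in the question, with USER_NAME as the key as suggested above (column sizes and names are assumptions, not a prescribed schema):
    -- Hypothetical join table: one row per book/user pair.
    CREATE TABLE BOOK_USER (
      BOOK_ID   NUMBER        NOT NULL REFERENCES BOOKS (BOOK_ID),
      USER_NAME VARCHAR2(100) NOT NULL
    );
    -- LOV query: the first column is displayed, the second is returned
    -- into the page item when the user picks an entry.
    SELECT USER_NAME AS display_value,
           USER_NAME AS return_value
      FROM APEX_WORKSPACE_APEX_USERS
     ORDER BY USER_NAME;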

  • How to accelerate by partitioning drives & how to distribute data among 'em

    Dear forum,
    I have read guides to storage acceleration and guides to Photoshop acceleration, but they always warn that the best solution depends on the work I do, the hardware I have, and the hardware I think I can afford to buy. I'm hoping that if I tell you what Photoshop work I do, what hardware I have, and what hardware I'm intending to buy, you can tell me how to accelerate by partitioning my drives and how to distribute data among them. My biggest questions are about how big the volumes should be, and what should go on each volume. It sounds vague here, but I get more specific below:
    THE PHOTOSHOP WORK I DO:
    *wet-mount raw scans of 6x7 cm film using SilverFast software on a Microtek ArtixScan 120tf 4000 dpi scanner: the resulting 16-bit TIFF file is typically 550 MB in size.
    *working in Photoshop CS2 on the same file, adding multiple layers makes the file 1 GB to 1.4 GB in size.
    *my system's limitations show up most painfully when I OPEN a file (this can take five minutes), SAVE a file (this can take more than ten minutes!), FLATTEN the image's layers for printing (this can take 5 minutes), and CONVERT the file from 16-bit to 8-bit (this can take 5 minutes). Most other operations in Ps CS2 are fast enough (not snappy, but fast enough) for me to stay with my current processor for the time being.
    THE HARDWARE I HAVE:
    *Power Mac G5 dual 1.8GHz, made in 2004, with only 4 slots for RAM (not 8 slots).
    (I'm told this has quite limited bus speed, as compared with other dual-processor G5s, and that this hardware will not benefit much at all from adding a RAID array.)
    *one internal Seagate 80 GB 7200 rpm SATA drive. This is half-full (it has 39 GB on it): it holds my OS and my Users folder, but NOT my Photoshop image files.
    *one internal Western Digital 400 GB 7200 rpm SATA drive. This holds my Photoshop image files, but not my Users folder. (This WD drive turns out to cause the G5 to hang occasionally, requiring a reboot; to avoid this, I recently learned, I can connect it with a host card adapter [see below].)
    *two 500 GB external FireWire drives
    *two 300 GB external USB drives
    *I have 2.25 GB of RAM, but I'm about to buy 2 more GB to max out at 4 GB.
    THE HARDWARE I'M INTENDING TO BUY:
    *2GB of RAM, of course.
    *two Hitachi T7K500 500 GB SATAII HD 16MB Cache 7200rpm drives to occupy both internal drive slots in the G5
    *a 2-drive external enclosure to hold my old Seagate 80 GB drive and my old WD 400 GB drive.
    *a SeriTek host card adapter for connecting the external enclosure to the G5.
    THE PLAN:
    What follows is a combination of suggestions I have received about what I could do and my speculation about how I could do it. Please see my Questions, embedded in the lines below: I'd be very grateful for any amendments or directions you can offer on this topic.
    Drive A: the first new internal Hitachi 500 GB drive:
    partition into 2 volumes:
    first (faster) volume, "volume A1," of 100 GB, to hold the OS and Users folder but NOT Photoshop image files.
    (Question: how much space should I leave free on volume A1 for optimum performance? Is 50% free of 100 GB optimal? Is 60% free of 100 GB better? Is 50% free of 150 GB better still? Or does that cut into the other volume's space too much, indirectly cutting into the space of "volume B1" on Drive B, which is to be the WorkDisk/ScratchDisk?)
    second (slower) volume, "volume A2," of the remaining space (almost 400 GB), as backup for the 400 GB "volume B1" of the OTHER internal Hitachi drive, a.k.a. Drive B.
    Drive B: the second new internal Hitachi 500 GB drive:
    partition into 2 volumes:
    first (faster) volume, "volume B1," of almost 400 GB, as the designated WorkDisk/ScratchDisk for large Photoshop image files;
    second (slower) volume, "volume B2" (exactly 100 GB), as backup for the 100 GB volume 1 (OS volume) of the OTHER internal Hitachi drive, a.k.a. Drive A.
    (Question: how much space should I leave free on this WorkDisk/ScratchDisk for optimum performance? Is 50% free of almost 400 GB optimal? Is 60% free of almost 400 GB better? Is 50% free of 300 GB just as good, with the additional advantage of indirectly allowing "volume A1" on Drive A to be 150+ GB?)
    Drive C: old Seagate 80 GB drive, in the external enclosure: disk designated for running the Photoshop application? How would I set this up? Any pitfalls to watch out for? Should I partition this drive, or leave the whole thing for Photoshop? Or is it better to run Photoshop off Drive D?
    Drive D: old WD 400 GB drive: second scratch disk? Storage disk? Both storage and scratch disk? How large should an empty volume on this disk be in order to be useful as a scratch disk? Volume 1 or volume 2? If I run the Photoshop application off this drive, how large should the volume for that be? Should it be volume 1, the faster, outside volume, leaving volume 2 for scratch disk space? Or vice versa?
    External FireWire and USB drives: I guess I'll just use them for storage/archiving and extra backup? Or am I much safer buying more SATA drives and enclosures? Or are the external FireWire and USB drives plenty safe (so long as I double-back-up), since I'll only power them up for the data transfer and then power them back down?
    Given that the large Photoshop files are not in my Users folder, does it matter whether I keep the Users folder (with its MS Word docs and a bunch of PDFs and so on) on my OS volume, "volume A1"? Would it speed things up when I'm using Photoshop if I moved the Users folder to another drive? What if I'd like to play iTunes while also working in Photoshop? My iTunes music folder (with all the song data) is already on an external FireWire drive, but the iTunes library and the iTunes application are, of course, in my Users folder, which is on the OS drive. Would moving the Users folder to another drive make much difference when I use Photoshop and iTunes simultaneously?
    But I wonder whether it makes sense to use volume A2 on Drive A as a backup volume: wouldn't it make more sense to back up my working files to two external drives that can be traded out, one on-site and one off-site, back and forth (not so convenient when one of the backup drives is internal!)? And after all, why would I devote a 400 GB volume to the task of backing up another 400 GB volume that will never be more than half full? I need to leave a WorkDisk/ScratchDisk half empty for efficient use, but I can back up that 200 GB of working files on a 200 GB volume, right? So for a backup drive, I might as well use a slow, inexpensive external USB drive that will only be turned on for backup and will then stay powered off, a drive that's easily transportable on and off site, right? Or am I misunderstanding something?
    By the way, what backup software do you recommend for backing up back and forth between Drive A and Drive B? I've been using Carbon Copy Cloner. Do you recommend that? Or something that does more archiving of progressive states of data?
    Thank you for any help you can offer!
    Sincerely,
    Mark Woods
    Dual 1.8 GHz PowerPC G5 (2.2), 512 KB L2 Cache per CPU, w/ 4 RAM slots   Mac OS X (10.3.9)   2.25 GB DDR SDRAM (2x128MB plus 2x1GB)


  • How to get data into the mySQL database?

    First some background.
    I have a website that has outgrown its designed dimensions and is a huge burden to maintain. See the PPBM5 Benchmark.
    There is a lot of maintenance work involved, so I'm investigating a PHP/MySQL approach to ease the burden and to add functionality to the site. With the current Excel-based structure and over 420 entries, it is cumbersome for me to maintain, and also hard for users to find what they need.
    A MySQL-based dynamic structure is a lot easier and offers vastly more selection capabilities, like selecting only records that meet specific criteria.
    Data submission is done with a form that contains most of the relevant data, but the drawback is that the people submitting their data are often not technically inclined and give wrong answers due to a lack of understanding, or make typos. The test results are attached in one or two separate .txt files, but often submitters have not read the instructions correctly or did something wrong, so these attached .txt files cannot be trusted automatically; they have to be checked before inclusion.
    These were my initial thoughts:
    1. Data collection:
    To avoid spending all our energy and time on correcting typos, getting missing data, and correcting errors, I am investigating the use of CPU-Z in Ghost mode to create a .txt or .html file that contains all the relevant hardware info we need, and even more. It gives all the info we currently have, but adds data like the number of memory sticks, DDR timings, stock clock speed and BCLK setting, video card info and VRAM size, etc.
    To see what I mean, run CPU-Z, go to the About tab, press the Save Report button and look at the results.
    This can all be done without user intervention in an automatic way, but maybe I need to add an AutoIt file to the test to make it all run as desired.
    If this works and I'm able to extract the relevant data from the created file and insert it into the database, we may be in business for the next version, PPBM5.5 or PPBM6. It does require a modification to the instructions, but it makes them a lot easier, because there is less data to fill out.
    2. Data submission:
    The submission form can be simplified if the CPU-Z data can be used. We have to create an automatic way to attach the created .html file from CPU-Z to the submission form, and we have to streamline the Output.txt and Output-MPE.txt files to be more easily included in the 'form.lib.php' file. It currently is manual labor and very time consuming.
    3. Adding to Database:
    I have to find a way to create database records from the Gmail forms I receive. All incoming mail messages need to be checked for relevancy and, if relevant, need to be added automatically to the database and then offered for approval before final inclusion in the database. Data included in the database will then include the submission date and time, email address and IP address used, plus links to the files submitted and available on the website.
    4. Publication of the database:
    After approval of new records from step 3, all updates will be automatically applied to the database and accessible to users. I do not yet intend to introduce user accounts, requiring a login before all functionality is accessible. Too much trouble and administration.
    Queries should be possible on things like CPU (check boxes, so include i7-920, i7-930 and i7-950 but exclude i7-980X and i7-990X), size of memory (check box), overclocked (boolean, yes/no), SSD as OS disk, and similar options.
    The biggest problem is to keep the color grading and statistical indicators (Top, D9, Q3, Med, Q1 and D1) intact on dynamically generated queries. Say you make a query which results in 20 observations; it should show the related colors and legends. The next query results in 48 observations, and of course the color grading and legends need to reflect that. A question in my mind: does the RPI remain constant, independent of the query, or does it need to be recalculated on the basis of the query?
    The next thing is to allow a user to select a specific observation and, by simply clicking on it, be shown all the CPU-Z related information about the hardware in a separate window (detail page) or accordion.
    The graphs, Top-20 and MPE Gains, need to be dynamically adjusted, based on the query used.
    5. Ideally, external links:
    In an ideal situation, one could link the CPU-Z data to external price databases, looking up current prices for the CPU, memory, video card, disks, RAID controller, etc. to get instant BFTB charts, based on the query made. But that is the next step.
    Situation now:
    I have a MySQL database that is easily updated with the new submissions: simply create a .CSV file from the submitted forms and import that into the database. The bulk of the initial work is done. Lots remains to be done, as you can see above, but that is for a later time.
    Question:
    I have this table that needs to be filled with the data in the submitted and attached files. Mr. X submitted his data and can be uniquely identified by his "Ref_ID". He attached one or two files in .TXT format with the relevant test data. These files are stored on the server with a concatenated name:
    "Ref_ID","-","filename"
    Say his Ref_ID is 20110204-6cf5 and his submitted file is called Output(99).txt; then the file can be found on the server as
    20110204-6cf5-Output(99).txt
    I need to be able to open that comma-delimited file, whose contents may look like this: "439","1036","819","531", and insert these contents into the relevant record and fields.
    Graphically, [screenshot omitted in the original post] is what I want to achieve.
    This being my first exposure to PHP/MySQL, you can imagine I'm not clear on how to go from here.
    An added complication is that I actually have 5 numbers to insert per record, and two fields, Total Score and RPI, should be calculated fields. I haven't yet figured out how to handle calculated fields; maybe only in the PHP/HTML code and not in the database.
    I hope someone can help me.

    You do have a very complex-looking site and may need several tables in MySQL to handle all that data. If you're new to PHP/MySQL, I would suggest taking a look at this tutorial; it will help get you started in understanding how to $_GET info from a database and also how to $_POST data to a database. I am no expert, just learning myself, and I found this very helpful. This is the link: http://www.adobe.com/devnet/dreamweaver/articles/first_dynamic_site_pt1.html
    There are also many tutorials on YouTube to help build a CMS (Content Management System). I would suggest the following:
    http://www.youtube.com/user/phpacademy
    http://www.youtube.com/user/betterphp
    http://www.youtube.com/user/flashbuilding
    And many more on my channel here
    http://www.youtube.com/user/Whisperingonthewind
    CMSs are easier to maintain, and to add, edit and delete content in.
    I have also recently bought a book by David Powers, "Training from the Source"; it is very helpful.
    Anyway hope you get it sorted.
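    Since no concrete code was posted in reply, here is a minimal SQL-only sketch of the loading step described in the question: pull one submitted .txt file into a staging table with LOAD DATA, then copy the five scores into the row identified by its Ref_ID. The results table and all column names are assumptions for illustration only; the Ref_ID and file name are the ones from the question, and Total Score is computed as a plain sum, which may not be PPBM5's actual formula:
    -- One-row staging table for the five comma-delimited scores.
    CREATE TABLE staging_scores (
      score1 INT, score2 INT, score3 INT, score4 INT, score5 INT
    );
    -- Load the attached file for one submission (the server path is an example).
    LOAD DATA LOCAL INFILE '/uploads/20110204-6cf5-Output(99).txt'
    INTO TABLE staging_scores
    FIELDS TERMINATED BY ',' ENCLOSED BY '"'
    LINES TERMINATED BY '\n';
    -- Copy the loaded values into the results row for that Ref_ID.
    UPDATE results r, staging_scores s
       SET r.score1 = s.score1, r.score2 = s.score2, r.score3 = s.score3,
           r.score4 = s.score4, r.score5 = s.score5
     WHERE r.ref_id = '20110204-6cf5';
    TRUNCATE TABLE staging_scores;
    -- Calculated fields can live in a view instead of the base table.
    CREATE VIEW results_scored AS
    SELECT r.*, (score1 + score2 + score3 + score4 + score5) AS total_score
      FROM results r;
    In PHP this would be one such statement per submitted file, with the Ref_ID and file name taken from the stored attachment's name.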

  • HOW TO IMPORT DATA MORE THAN ONCE FROM THE SAME EXPORT DUMP FILE?

    Before asking my question I'd like to mention that I'm a French speaker,
    so my English is a little rough. Sorry for that.
    My problem is: IMPORT.
    How do I import data a SECOND TIME from an export dump file within Oracle?
    My export dump file was made successfully (Full Export), and then I
    tried to import the data for the first time.
    I got the following messages in my logfile (I ADDED SOME COMMENTS):
    Warning: the objects were exported by L1, not by you
    . importing SYSTEM's objects into SYSTEM
    REM ************** CREATING TABLESPACES *****
    REM *********************************************
    IMP-00015: following statement failed because the object already exists:
    "CREATE TABLESPACE "USER_DATA" DATAFILE 'E:\ORANT\DATABASE\USR1ORCL.ORA' SI"
    "ZE 3145728 DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAX"
    "EXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT"
    IMP-00015: following statement failed because the object already exists:
    "CREATE TABLESPACE "ROLLBACK_DATA" DATAFILE 'E:\ORANT\DATABASE\RBS1ORCL.ORA"
    "' SIZE 10485760 DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS "
    "1 MAXEXTENTS 121 PCTINCREASE 50) ONLINE PERMANENT"
    etc........
    IMP-00017: following statement failed with ORACLE error 1119:
    "CREATE TABLESPACE "L1" DATAFILE 'E:\ORADATA\L1.DBF' SIZE 1048576000 "
    "DEFAULT STORAGE (INITIAL 10240 NEXT 10240 MINEXTENTS 1 MAXEXTENTS 121 PCTIN"
    "CREASE 50) ONLINE PERMANENT"
    IMP-00003: ORACLE error 1119 encountered
    ORA-01119: error in creating database file 'E:\ORADATA\L1.DBF'
    ORA-09200: sfccf: error creating file
    OSD-04002: unable to open file
    O/S-Error: (OS 3) The system cannot find the path specified
    --->etc..........
    The drive E: with the folder E:\ORADATA didn't exist, but after
    all that I created it.
    See below, before my IMPORT statement:
    REM ********************* CREATING USER *********
    REM ********************************************
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE USER "L1" IDENTIFIED BY VALUES 'A6E0DAA6865E7627' DEFAULT TABLESPACE"
    " "L1" TEMPORARY TABLESPACE "TEMPORARY_DATA""
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'L1' does not exist
    IMP-00017: following statement failed with ORACLE error 959:
    "CREATE USER "MLCO" IDENTIFIED BY VALUES '56AC6447B7D50467' DEFAULT TABLESPA"
    "CE "MLCO" TEMPORARY TABLESPACE "TEMPORARY_DATA""
    IMP-00003: ORACLE error 959 encountered
    ORA-00959: tablespace 'MLCO' does not exist
    ETC.......
    REM ********************* GRANTING ROLES ***********
    REM ************************************************
    IMP-00017: following statement failed with ORACLE error 1917:
    "GRANT ALTER ANY TABLE to "L1" "
    IMP-00003: ORACLE error 1917 encountered
    ORA-01917: user or role 'L1' does not exist
    ETC.........
    IMP-00017: following statement failed with ORACLE error 1918:
    "ALTER USER "L1" DEFAULT ROLE ALL"
    IMP-00003: ORACLE error 1918 encountered
    ORA-01918: user 'L1' does not exist
    -- that is normal, since the creation of the
    tablespace failed !!
    REM******************************
    IMP-00015: following statement failed because the object already exists:
    "CREATE ROLLBACK SEGMENT RB_TEMP STORAGE (INITIAL 10240 NEXT 10240 MINEXTENT"
    "S 2 MAXEXTENTS 121) TABLESPACE "SYSTEM""
    IMP-00015: following statement failed because
    . importing SCOTT's objects into SCOTT
    IMP-00015: following statement failed because the object already exists:
    "CREATE SEQUENCE "EVT_PROFILE_SEQ" MINVALUE 1 MAXVALUE 999999999999999999999"
    "999999 INCREMENT BY 1 START WITH 21 CACHE 20 NOORDER NOCYCLE"
    ETC............
    importing L1's objects into L1
    IMP-00017: following statement failed with ORACLE error 1435:
    "ALTER SCHEMA = "L1""
    IMP-00003: ORACLE error 1435 encountered
    ORA-01435: user does not exist
    REM *************** IMPORTING TABLES *******************
    REM ****************************************************
    . importing SYSTEM's objects into SYSTEM
    . . importing table "AN1999_BDAT" 243 rows imported
    . . importing table "BOPD" 112 rows imported
    . . importing table "BOINFO_AP" 49
    ETC................
    . . importing table "BO_WHF" 2 rows imported
    IMP-00015: following statement failed because the object already exists:
    "CREATE TABLE "DEF$_CALL" ("DEFERRED_TRAN_DB" VARCHAR2(128),
    IMP-00015: following statement failed because the object already exists:
    "CREATE SYNONYM "DBA_ROLES" FOR "SYS"."DBA_ROLES""
    IMP-00015: following statement failed because the object already exists:
    "CREATE SYNONYM "DBA_ERRORS" FOR "SYS"."DBA_ERRORS""
    IMP-00008: unrecognized statement in the export file:
    . importing L1's objects into L1
    IMP-00017: following statement failed with ORACLE error 1435:
    "ALTER SCHEMA = "L1""
    IMP-00008: unrecognized statement in the export file:
    J
    Import terminated successfully with warnings.
    -------------------------------------
    So after analysing this log file, I created
    the appropriate drives and folders, since the
    import statement doesn't see them:
    E:\ORADATA, G:\ORDATA, etc.
    And I started to IMPORT ONE MORE TIME, with:
    $ IMP73 sys/pssw Full=Y FILE=c:\temp\FOLD_1\data_1.dmp BUFFER=64000
    COMMIT=Y INDEXFILE=c:\temp\FOLD_1\BOO_idx.sql
    LOG=c:\temp\FOLD_1\BOO_log.LOG DESTROY=Y IGNORE=Y;
    After that I could not see the users or the
    tables created,
    and the following message appeared in the log file:
    Warning: the objects were exported by L1, not by you
    . . skipping table "AN1999_BDAT"
    . . skipping table "ANPK"
    . . skipping table "BOAP"
    . . skipping table "BOO_D"
    ETC.....skipping all the tables
    . . skipping table "THIN_PER0"
    . . skipping table "UPDATE_TEMP"
    Import terminated successfully without warnings.
    and only 2 new tablespaces (originally 3) were
    created, without any data in them (I checked that in
    the Oracle Storage Manager: the tablespaces exist
    with 0.002 used space; originally 60 MB each!!)
    So,
    how do I import data (with the full import option) successfully
    MORE THAN ONE TIME from an exported dump file,
    even if we have to overwrite tablespaces, tables and users?
    Thank you very much.

    The Member Feedback forum is for suggestions and feedback for OTN Developer Services. This forum is not monitored by Oracle support or product teams and so Oracle product and technology related questions will not be answered. We recommend that you post this thread to the appropriate Database forum.
    The main URL is:
    http://forums.oracle.com/forums/index.jsp?cat=18
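    For what it's worth, the usual way to get a clean second full import is to remove the leftovers of the first attempt before re-running imp. A hedged sketch using the object names from the log above (skip any statement that fails because the object was never created; on releases older than 9i the dropped tablespaces' datafiles must also be deleted at the OS level):
    -- Drop the users from the dump so their objects can be recreated;
    -- CASCADE removes everything they own.
    DROP USER L1 CASCADE;
    DROP USER MLCO CASCADE;
    -- Drop the non-SYSTEM tablespaces so the CREATE TABLESPACE statements
    -- in the dump can run again (never drop SYSTEM).
    DROP TABLESPACE USER_DATA INCLUDING CONTENTS;
    DROP TABLESPACE ROLLBACK_DATA INCLUDING CONTENTS;
    After this cleanup, and with E:\ORADATA now existing, the same imp command should be able to recreate the tablespaces and users and then load their tables.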

  • How to change the data type of an attribute in CAF

    Hi,
    this is my problem:
    I added an attribute named "Birthday" to an entity service called Employee, using the data type "com.sap.caf.core.date".
    When I tried to modify the entity service, I hit an exception about "DST Daylight Saving Time".
    Because of DST, the date gets an hour added, such as "1957/04/01 00:00:00.0" becoming "1957/04/01 01:00:00.0", and com.sap.caf.core.date will throw an exception.
    Has anyone had the same problem?
    How can it be fixed (if possible)?
    And how can I change the data type?
    Thanks.

    Hi Sruthi,
    all you need to do is open the report, go to the parameters, select the parameter that you wish to change and click "Edit". This will allow you to change the field to a date rather than a date-time.
    Regards,
    Noel

  • How to import data from a CSV file with columns separated by semicolons?

    I am migrating a database from MS SQL 2008 to Oracle 11g.
    I exported the data to CSV files from MS SQL,
    then I tried to import them into Oracle.
    Several tables went fine using the Import Data option in SQL Developer.
    Standard CSV files with data separated by commas were imported.
    Char, date (with a format string), and integer data are imported via the import wizard without problems.
    The problems came when I tried to import a table with non-integer numbers whose decimal part is separated by a comma, not by a dot.
    The comma is the standard separator for columns in a CSV file,
    so I had to change the standard separator to a semicolon.
    Then the import wizard has problems correctly recognizing the column data, because it uses only the standard CSV comma separator :-/
    In SQL Developer 1.5.3, under Tools -> Preferences -> Migration -> Data Move Options,
    I changed "End of Column Delimiter" to ";" but it doesn't work.
    Is it possible to change the standard column separator for the Import Data wizard in SQL Developer 1.5.3?
    Or maybe someone knows how to import data in SQL Developer 1.5.3 from a CSV file whose column separator is set to semicolon?

    A new preference has been added in the main code line to customize the import delimiter. This should be available as part of a future release.
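    Until then, one workaround sketch is to bypass the wizard and declare the semicolon delimiter yourself in an external table. The directory path, table and column names below are illustrative only; the decimal comma is handled explicitly with TO_NUMBER:
    -- Directory object pointing at the folder that holds the CSV file.
    CREATE DIRECTORY csv_dir AS 'C:\import';
    -- External table: semicolon-separated fields, numbers read as text first.
    CREATE TABLE prices_ext (
      item_name VARCHAR2(100),
      price_txt VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY csv_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ';'
        OPTIONALLY ENCLOSED BY '"'
      )
      LOCATION ('prices.csv')
    );
    -- Convert the decimal comma while copying into the target table.
    INSERT INTO prices (item_name, price)
    SELECT item_name,
           TO_NUMBER(price_txt, '9999999D99', 'NLS_NUMERIC_CHARACTERS='',.''')
      FROM prices_ext;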

  • How to backdate the schedule line date of a Consignment Fill-Up

    Hi,
    There is available stock for the material used. In VA01, for order type KB (Consignment Fill-up), I put in a backdated schedule line date (example: 09.09.2008). However, the system automatically proposed a new schedule line date of today's date (example: 13.10.2008).
    How can I retain the backdated date with the confirmed quantity instead of today's date?
    Thanks.

    Hi Wooi Ling,
    when the system is not able to confirm the schedule lines, it gives a proposal date on which the requirement can be met.
    In this case, you can do one thing: set the indicator "Fix date and quantity" in the item schedule line. This fixes the date and the quantity for the delivery and the schedule lines.
    This also means the customer accepts the delivery date and quantity confirmed by the system. The system then passes on the MRP requirements so that the confirmed date and quantity can be met.
    Hope this helps.
    Thanks,
    Akasha

  • How to load data into user tables using DI APIs?

    Hi,
    I have created a user table using the UserTablesMD object,
    but I don't know how to load data into this user table. I guess I have to use the UserTable object for that, but I still don't know how to put data into a particular column.
    Can somebody please help me with this?
    I would appreciate it if somebody could share their code in this regard.
    Thank you,
    Sudha

    You can try this code:
    Dim lRetCode As Long
    Dim userTable As SAPbobsCOM.UserTable
    userTable = pCompany.UserTables.Item("My_Table")
    'First row in the @My_Table table
    userTable.Code = "A1"
    userTable.Name = "A.1"
    userTable.UserFields.Fields.Item("U_1stF").Value = "First row value"
    lRetCode = userTable.Add()
    'Second row in the @My_Table table
    userTable.Code = "A2"
    userTable.Name = "A.2"
    userTable.UserFields.Fields.Item("U_1stF").Value = "Second row value"
    lRetCode = userTable.Add()
    'A nonzero return code means the add failed; see pCompany.GetLastErrorDescription
    This way I have added two rows to my table.
    Hope it helps,
    Trinidad.

  • How to define data-source-aliases.xml to use an Oracle data source

    Hi,
    I created an Oracle JDBC data source named "MYDS" in the Visual Admin. I added the alias MYDS_ALIAS. I set the Initial Connections to 2. On the Monitor tab, I see a green line running across the screen. The data source MYDS is connected to Oracle.
    Now, I would like to define "data-source-aliases.xml" to use MYDS.
    When creating data-source-aliases.xml, it creates the alias by default on ${com.sap.datasource.default}, which is the default DB.
    I followed the link --> How to point data-source-aliases.xml to another Datasource
    and my data-source-aliases.xml is now:
    <data-source-aliases>
         <aliases>
              <data-source-name>$</data-source-name>
              <alias>BRANCH50DS</alias>
         </aliases>
    </data-source-aliases>
    The "EAR generatation has finished successfully", but when "Deploy to J2EE Engine", I got the error below:
    1) How to define data-sources-alias.xml to use Oracle data source?
    2) I found the link (SDA Deployment error) talking about "sda-dd.xml".  How to create "sda-dd.xml" and make it use substitution variables "MYDS"
    Dec 20, 2005 3:53:41 PM /userOut/deploy (com.sap.ide.eclipse.sdm.threading.DeployThreadManager) [Thread[Deploy Thread,5,main]] INFO:
    [003]Additional log information about the deployment
    <!LOGHEADER[START]/>
    <!HELP[Manual modification of the header may cause parsing problem!]/>
    <!LOGGINGVERSION[1.5.3.7181 - 630_SP]/>
    <!NAME[C:\usr\sap\J2E\JC00\SDM\program\log\sdmcl20051220205339.log]/>
    <!PATTERN[sdmcl20051220205339.log]/>
    <!FORMATTER[com.sap.tc.logging.TraceFormatter(%24d %s: %m)]/>
    <!ENCODING[Cp1252]/>
    <!LOGHEADER[END]/>
    Dec 20, 2005 3:53:39 PM  Info: -
    Starting deployment -
    Dec 20, 2005 3:53:39 PM  Info: Loading selected archives...
    Dec 20, 2005 3:53:39 PM  Info: Loading archive 'C:\usr\sap\J2E\JC00\SDM\program\temp\temp856850022Ear.ear'
    Dec 20, 2005 3:53:40 PM  Info: Selected archives successfully loaded.
    Dec 20, 2005 3:53:40 PM  Info: Actions per selected component:
    Dec 20, 2005 3:53:40 PM  Info: Update: Selected development component '50022Ear'/'sap.com'/'localhost'/'2005.12.20.15.53.28' updates currently deployed development component '50022Ear'/'sap.com'/'localhost'/'2005.12.19.18.39.12'.
    Dec 20, 2005 3:53:40 PM  Info: Saved current Engine state.
    Dec 20, 2005 3:53:40 PM  Info: Error handling strategy: OnErrorStop
    Dec 20, 2005 3:53:40 PM  Info: Update strategy: UpdateAllVersions
    Dec 20, 2005 3:53:40 PM  Info: Starting: Update: Selected development component '50022Ear'/'sap.com'/'localhost'/'2005.12.20.15.53.28' updates currently deployed development component '50022Ear'/'sap.com'/'localhost'/'2005.12.19.18.39.12'.
    Dec 20, 2005 3:53:40 PM  Info: SDA to be deployed: C:\usr\sap\J2E\JC00\SDM\root\origin\sap.com\50022Ear\localhost\2005.12.20.15.53.28\temp856850022Ear.ear
    Dec 20, 2005 3:53:40 PM  Info: Software type of SDA: J2EE
    Dec 20, 2005 3:53:40 PM  Info: ***** Begin of SAP J2EE Engine Deployment (J2EE Application) *****
    Dec 20, 2005 3:53:41 PM  Info: Begin of log messages of the target system:
    05/12/20 15:53:40 -  ***********************************************************
    05/12/20 15:53:41 -  Start updating EAR file...
    05/12/20 15:53:41 -  start-up mode is lazy
    05/12/20 15:53:41 -  com.sap.engine.deploy.manager.MissingSubstitutionException: Missing substitution value for variable [MYDS].
                              at com.sap.engine.deploy.manager.DeployManagerImpl.makeTempEar(DeployManagerImpl.java:3727)
                              at com.sap.engine.deploy.manager.DeployManagerImpl.makeNewEar1(DeployManagerImpl.java:3695)
                              at com.sap.engine.deploy.manager.DeployManagerImpl.deployUpdateAction(DeployManagerImpl.java:523)
                              at com.sap.engine.deploy.manager.DeployManagerImpl.update(DeployManagerImpl.java:512)
                              at com.sap.sdm.serverext.servertype.inqmy.extern.EngineApplOnlineDeployerImpl.performDeployment(EngineApplOnlineDeployerImpl.java:196)
                              at com.sap.sdm.serverext.servertype.inqmy.extern.EngineDeployerImpl.deploy(EngineDeployerImpl.java:96)
                              at com.sap.sdm.serverext.servertype.inqmy.EngineProcessor.executeAction(EngineProcessor.java:224)
                              at com.sap.sdm.app.proc.deployment.impl.PhysicalDeploymentActionExecutor.execute(PhysicalDeploymentActionExecutor.java:60)
                              at com.sap.sdm.app.proc.deployment.impl.DeploymentActionImpl.execute(DeploymentActionImpl.java:186)
                              at com.sap.sdm.app.proc.deployment.controllers.internal.impl.DeploymentExecutorImpl.execute(DeploymentExecutorImpl.java:46)
                              at com.sap.sdm.app.proc.deployment.states.eventhandler.ExecuteDeploymentHandler.executeAction(ExecuteDeploymentHandler.java:83)
                              at com.sap.sdm.app.proc.deployment.states.eventhandler.ExecuteDeploymentHandler.handleEvent(ExecuteDeploymentHandler.java:60)
                              at com.sap.sdm.app.proc.deployment.states.StateBeforeNextDeployment.processEvent(StateBeforeNextDeployment.java:127)
                              at com.sap.sdm.app.proc.deployment.states.InstContext.processEventServerSide(InstContext.java:73)
                              at com.sap.sdm.app.proc.deployment.states.InstContext.processEvent(InstContext.java:59)
                              at com.sap.sdm.app.sequential.deployment.impl.DeployerImpl.doPhysicalDeployment(DeployerImpl.java:127)
                              at com.sap.sdm.app.sequential.deployment.impl.DeployerImpl.deploy(DeployerImpl.java:96)
                              at com.sap.sdm.apiimpl.local.DeployProcessorImpl.deploy(DeployProcessorImpl.java:67)
                              at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
                              at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
                              at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
                              at java.lang.reflect.Method.invoke(Method.java:324)
                              at com.sap.sdm.is.cs.remoteproxy.server.impl.RemoteProxyServerImpl.requestRemoteCall(RemoteProxyServerImpl.java:127)
                              at com.sap.sdm.is.cs.remoteproxy.server.impl.RemoteProxyServerImpl.process(RemoteProxyServerImpl.java:38)
                              at com.sap.sdm.apiimpl.remote.server.ApiClientRoleCmdProcessor.process(ApiClientRoleCmdProcessor.java:81)
                              at com.sap.sdm.is.cs.session.server.SessionCmdProcessor.process(SessionCmdProcessor.java:67)
                              at com.sap.sdm.is.cs.cmd.server.CmdServer.execCommand(CmdServer.java:76)
                              at com.sap.sdm.client_server.launch.ServerLauncher$ConnectionHandlerImpl.handle(ServerLauncher.java:280)
                              at com.sap.sdm.is.cs.ncserver.NetCommServer.serve(NetCommServer.java:43)
                              at com.sap.sdm.is.cs.ncwrapper.impl.ServiceWrapper.serve(ServiceWrapper.java:39)
                              at com.sap.bc.cts.tp.net.Worker.run(Worker.java:50)
                              at java.lang.Thread.run(Thread.java:534)
    05/12/20 15:53:41 -  ***********************************************************
    Dec 20, 2005 3:53:41 PM  Info: End of log messages of the target system.
    Dec 20, 2005 3:53:41 PM  Info: ***** End of SAP J2EE Engine Deployment (J2EE Application) *****
    Dec 20, 2005 3:53:41 PM  Error: Aborted: development component '50022Ear'/'sap.com'/'localhost'/'2005.12.20.15.53.28':
    Caught exception during application deployment from SAP J2EE Engine's deploy API:
    com.sap.engine.deploy.manager.MissingSubstitutionException: Missing substitution value for variable [MYDS].
    (message ID: com.sap.sdm.serverext.servertype.inqmy.extern.EngineApplOnlineDeployerImpl.performAction(DeploymentActionTypes).DMEXC)
    Dec 20, 2005 3:53:41 PM  Info: J2EE Engine is in same state (online/offline) as it has been before this deployment process.
    Dec 20, 2005 3:53:41 PM  Error: -
    At least one of the Deployments failed -

    Hi,
    If you remove the $ and braces (as shown below) it will work.
    <data-source-aliases>
         <aliases>
              <data-source-name>MYDS</data-source-name>
              <alias>BRANCH50DS</alias>
         </aliases>
    </data-source-aliases>
    Regards,
    S.Divakar

  • How to take a data dump (export) with a given language set on an Oracle 9i database (production server)?

    Hi,
    I took a data dump (export) on an Oracle 9i machine and ported (imported) it to Oracle 10g (the production machine), but it is showing an error: a language set error.
    Could you tell me how to take a data dump with the language set?
    Regards,
    Suva

    Hi PaulM,
    please find the details below.
    The development server is a 9i machine (I exported on this machine), and I imported on the production server (it is Oracle 10g).
    When importing on the production server the error comes up; the error log is added below.
    Production database (language details)
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_RDBMS_VERSION 10.2.0.1.0
    Development database (language details)
    NLS_LANGUAGE AMERICAN
    NLS_TERRITORY AMERICA
    NLS_CURRENCY $
    NLS_ISO_CURRENCY AMERICA
    NLS_NUMERIC_CHARACTERS .,
    NLS_CHARACTERSET UTF8
    NLS_CALENDAR GREGORIAN
    NLS_DATE_FORMAT DD-MON-RR
    NLS_DATE_LANGUAGE AMERICAN
    NLS_SORT BINARY
    NLS_TIME_FORMAT HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY $
    NLS_COMP BINARY
    NLS_LENGTH_SEMANTICS BYTE
    NLS_NCHAR_CONV_EXCP FALSE
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_RDBMS_VERSION 10.2.0.1.0
    Log file
    Connected to: Oracle Database 10g Release 10.2.0.1.0 - Production
    Export file created by EXPORT:V09.02.00 via conventional path
    import done in WE8MSWIN1252 character set and UTF8 NCHAR character set
    import server uses UTF8 character set (possible charset conversion)
    export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
    . importing JW_OR's objects into JW_OR
    . importing JW_OS's objects into JW_OS
    . importing JW_ADMIN's objects into JW_ADMIN
    . importing JW_OR's objects into JW_OR
    . . importing table                      "ACCRXNS"     234671 rows imported
    . . importing table                  "AUTHORLINKS"     790450 rows imported
    . . importing table                      "AUTHORS"      79500 rows imported
    . . importing table                       "CATSOL"      25505 rows imported
    . . importing table               "CATSOLSYNONYMS"      80045 rows imported
    . . importing table                "CHAPTERTITLES"        133 rows imported
    . . importing table                "COMPOUNDLINKS"     601785 rows imported
    . . importing table                   "CONDITIONS"     207445 rows imported
    . . importing table                     "JOURNALS"       2327 rows imported
    . . importing table                     "LANGUAGE"          0 rows imported
    . . importing table                     "MAINDATA"     234659 rows imported
    . . importing table                      "MOLDATA"     721174 rows imported
    . . importing table                   "PLAN_TABLE"          1 rows imported
    . . importing table                   "REFERENCES"     276783 rows imported
    . . importing table                        "ROLES"          2 rows imported
    . . importing table                  "RXNKEYLINKS"    1724404 rows imported
    . . importing table                  "RXNKEYWORDS"        848 rows imported
    . . importing table                  "TABLETITLES"       2400 rows imported
    . . importing table                   "TEMP_TABLE"     165728 rows imported
    . . importing table          "TEMP_WILEY_MAINDATA"     155728 rows imported
    . . importing table           "TEMP_WILEY_PDF_MAP"      16672 rows imported
    . . importing table      "TEMP_WILEY_YEAR_VOL_MAP"         42 rows imported
    . . importing table                  "WEX_ACCRXNS"       3465 rows imported
    . . importing table              "WEX_AUTHORLINKS"      14183 rows imported
    . . importing table                  "WEX_AUTHORS"      79500 rows imported
    . . importing table            "WEX_CHAPTERTITLES"        133 rows imported
    . . importing table            "WEX_COMPOUNDLINKS"      10925 rows imported
    . . importing table               "WEX_CONDITIONS"       5297 rows imported
    . . importing table                 "WEX_JOURNALS"       2327 rows imported
    . . importing table                 "WEX_LANGUAGE"          0 rows imported
    . . importing table                 "WEX_MAINDATA"       3465 rows imported
    . . importing table                  "WEX_MOLDATA"      10358 rows imported
    . . importing table               "WEX_REFERENCES"       3795 rows imported
    . . importing table              "WEX_RXNKEYLINKS"      34540 rows imported
    . . importing table              "WEX_RXNKEYWORDS"        848 rows imported
    . . importing table              "WEX_TABLETITLES"       2400 rows imported
    . . importing table           "WEX_WILEY_HTML_MAP"      17316 rows imported
    . . importing table           "WEX_WILEY_MAINDATA"       3465 rows imported
    . . importing table            "WEX_WILEY_PDF_MAP"      23925 rows imported
    . . importing table       "WEX_WILEY_YEAR_VOL_MAP"         58 rows imported
    . . importing table               "WILEY_HTML_MAP"      17316 rows imported
    . . importing table               "WILEY_MAINDATA"     234659 rows imported
    . . importing table                "WILEY_PDF_MAP"      23925 rows imported
    . . importing table           "WILEY_YEAR_VOL_MAP"         58 rows imported
    . importing JW_OS's objects into JW_OS
    . . importing table                      "ACCRXNS"       7116 rows imported
    . . importing table                   "ATMOSPHERE"         47 rows imported
    . . importing table                  "AUTHORLINKS"      33276 rows imported
    . . importing table                      "AUTHORS"       6555 rows imported
    . . importing table                       "CATSOL"       1463 rows imported
    . . importing table               "CATSOLSYNONYMS"       9370 rows imported
    . . importing table                    "CHEMICALS"      78197 rows imported
    . . importing table                "COMPOUNDLINKS"      20799 rows imported
    . . importing table                       "EXPDET"          1 rows imported
    . . importing table                    "FOOTNOTES"      77825 rows imported
    . . importing table                     "JOURNALS"          2 rows imported
    . . importing table                     "LANGUAGE"          2 rows imported
    . . importing table                     "MAINDATA"       7116 rows imported
    . . importing table                     "PATHSTEP"       7199 rows imported
    . . importing table               "PROCEDURENOTES"      77293 rows imported
    . . importing table                        "ROLES"          2 rows imported
    . . importing table                  "RXNKEYLINKS"      23096 rows imported
    . . importing table                  "RXNKEYWORDS"       1272 rows imported
    . . importing table                  "WEX_ACCRXNS"        135 rows imported
    . . importing table               "WEX_ATMOSPHERE"         47 rows imported
    . . importing table              "WEX_AUTHORLINKS"        613 rows imported
    . . importing table                  "WEX_AUTHORS"       6555 rows imported
    . . importing table                "WEX_CHEMICALS"          0 rows imported
    . . importing table            "WEX_COMPOUNDLINKS"        497 rows imported
    . . importing table                   "WEX_EXPDET"          1 rows imported
    . . importing table                "WEX_FOOTNOTES"       2184 rows imported
    . . importing table                 "WEX_JOURNALS"          2 rows imported
    . . importing table                 "WEX_LANGUAGE"          2 rows imported
    . . importing table                 "WEX_MAINDATA"        135 rows imported
    . . importing table                 "WEX_PATHSTEP"        135 rows imported
    . . importing table           "WEX_PROCEDURENOTES"       2253 rows imported
    . . importing table              "WEX_RXNKEYLINKS"        695 rows imported
    . . importing table              "WEX_RXNKEYWORDS"       1272 rows imported
    . importing JW_ADMIN's objects into JW_ADMIN
    . . importing table                     "APP_USER"         76 rows imported
    . . importing table                       "AUTHOR"      61874 rows imported
    . . importing table                     "CITATION"
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 10794
    Column 2 77
    Column 3 1
    Column 4 24
    Column 5
    Column 6 Science of Synthesis
    Column 7 Negishi, E.-i.; Takahashi, T. Science of Synthesis...
    Column 8 681–848
    Column 9 2
    Column 10
    Column 11 2002
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 10879
    Column 2 77
    Column 3 1
    Column 4 110
    Column 5
    Column 6 Comprehensive Organic Synthesis
    Column 7 Hiemstra, H.; Speckamp, W. N.; Trost, B. M.; Flemi...
    Column 8 1047–108
    Column 9 2
    Column 10
    Column 11
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 10880
    Column 2 77
    Column 3 1
    Column 4 111
    Column 5
    Column 6 Houben-Weyl Methods of Organic Chemistry
    Column 7 De Koning, H.; Speckamp, W. N.; Helmchen, G.; Hoff...
    Column 8 1953–200
    Column 9 E21b
    Column 10
    Column 11 1995
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 10904
    Column 2 77
    Column 3 1
    Column 4 135
    Column 5
    Column 6 Houben-Weyl Methods of Organic Chemistry
    Column 7 Ryu, I.; Murai, S.; de Meijere, A., Ed. Houben-Wey...
    Column 8 1985–204
    Column 9 E17c
    Column 10
    Column 11 1997
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 10905
    Column 2 77
    Column 3 1
    Column 4 136
    Column 5
    Column 6 The Chemistry of the Cyclopropyl Group
    Column 7 Tsuji, T.; Nishida, S.; Patai, S.; Rappoport, Z., ...
    Column 8 307–373
    Column 9
    Column 10
    Column 11 1987
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 10906
    Column 2 77
    Column 3 1
    Column 4 137
    Column 5
    Column 6 The Chemistry of the Cyclopropyl Group
    Column 7 Vilsmaier, E.; Patai, S.; Rappoport, Z., Eds. The ...
    Column 8 1341–145
    Column 9
    Column 10
    Column 11 1987
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 10952
    Column 2 77
    Column 3 1
    Column 4 183
    Column 5
    Column 6 Cyclopropane-Derived Reactive Intermediates
    Column 7 Boche, G.; Walborsky, H. M. Cyclopropane-Derived R...
    Column 8 117–173
    Column 9
    Column 10
    Column 11 1990
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 10958
    Column 2 77
    Column 3 1
    Column 4 189
    Column 5
    Column 6 Houben-Weyl Methods of Organic Chemistry
    Column 7 Klunder, A. J. H.; Zwanenburg, B. Houben-Weyl Meth...
    Column 8 2419–243
    Column 9 E17c
    Column 10
    Column 11 1997
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 10995
    Column 2 77
    Column 3 1
    Column 4 226
    Column 5
    Column 6 Science of Synthesis
    Column 7 Cha, J. K. Science of Synthesis 2005, 325–338.
    Column 8 325–338
    Column 9
    Column 10
    Column 11 2005
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 10, maximum: 8)
    Column 1 17123
    Column 2 82
    Column 3 1
    Column 4 13
    Column 5
    Column 6 Comprehensive Organometallic Chemistry II
    Column 7 Dushin, R. G.; Edward, W. A.; Stone, F. G. A.; Wil...
    Column 8 1071–109
    Column 9 12
    Column 10
    Column 11 1995
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 17124
    Column 2 82
    Column 3 1
    Column 4 14
    Column 5
    Column 6 Modern Carbonyl Olefination
    Column 7 Ephritikhine, M.; Villiers, C.; Takeda, T. Ed. Mod...
    Column 8 223–285
    Column 9
    Column 10
    Column 11 2004
    IMP-00019: row rejected due to ORACLE error 12899
    IMP-00003: ORACLE error 12899 encountered
    ORA-12899: value too large for column "JW_ADMIN"."CITATION"."PAGE" (actual: 9, maximum: 8)
    Column 1 17126
    Column 2 82
    Column 3 1
    Column 4 16
    Column 5
    Column 6 Transition Metals for Organic Synthesis (2nd Editi...
    Column 7 Furstner, A.; Beller, M.; Bolm, C. Eds. Transition...
    Column 8 449–468
    Column 9
    Column 10
    Column 11 2004      17712 rows imported
    . . importing table                     "FOOTNOTE"         38 rows imported
    . . importing table              "GT_STATS_REPORT"          0 rows imported
    . . importing table         "GT_VALIDATION_REPORT"          0 rows imported
    . . importing table                     "OR_USERS"          1 rows imported
    . . importing table                     "OS_USERS"          1 rows imported
    . . importing table                "PROCEDURENOTE"         70 rows imported
    . . importing table                  "QC_TRACKING"     539881 rows imported
    . . importing table                         "ROLE"          5 rows imported
    . . importing table                       "SCHEMA"          3 rows imported
    . . importing table              "TASK_ALLOCATION"     159370 rows imported
    . . importing table                     "USER_LOG"     174488 rows imported
    . . importing table                      "VERSION"          3 rows imported
    About to enable constraints...
    IMP-00017: following statement failed with ORACLE error 2298:
    "ALTER TABLE "AUTHOR" ENABLE CONSTRAINT "FK_AUTHOR_CITATIONID""
    IMP-00003: ORACLE error 2298 encountered
    ORA-02298: cannot validate (JW_ADMIN.FK_AUTHOR_CITATIONID) - parent keys not found
    Import terminated successfully with warnings.
    Regards,
    Subash
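    For the record, the rejected rows are not really a language-set failure: the PAGE values contain en-dashes, which are 1 byte in the exporting client's WE8MSWIN1252 character set but 3 bytes in the target's UTF8, so a value like "681–848" grows from 7 to 9 bytes and overflows a VARCHAR2(8) column defined with byte semantics (NLS_LENGTH_SEMANTICS is BYTE). A hedged fix, assuming PAGE is VARCHAR2(8 BYTE), is to relax the column before re-importing that table with IGNORE=Y:
    -- Character semantics: 8 characters regardless of bytes per character...
    ALTER TABLE JW_ADMIN.CITATION MODIFY (PAGE VARCHAR2(8 CHAR));
    -- ...or simply widen the byte limit instead:
    -- ALTER TABLE JW_ADMIN.CITATION MODIFY (PAGE VARCHAR2(24));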

  • How does IDoc data transfer happen between two systems?

    Hi,
    If there is data in system A that I need to transfer to system B using IDocs,
    what are the key things that will enable the data transfer from system A to system B?
    It would be great if someone could explain how IDoc data gets transferred from system A to system B.
    Regards,
    Rachel
    Moderator Message : Search for available information or Read SAP documentation. Thread locked
    Edited by: Vinod Kumar on Apr 26, 2011 12:37 PM

    Thanks for all your replies...
    Let me write down what I have understood; please correct me if I have not followed properly:
    1) Once IDocs successfully reach R/3, a function module specific to the IDoc type is called automatically and updates the database.
    2) This function module is what we specify in the process code while creating partner profiles.
    Basically, all we need to do is specify the correct function module, right?
    The rest of the process, i.e. the updates to the tables (database), is done automatically.
    Once the database is updated, where can I see the added/updated records in the database?
    Also, how do we find this function module?
    I am using MATMAS05...
    Regards,
    Roshani

  • How to get data in the matrix grid from an A/P Invoice

    How do I get the data in the matrix grid of an A/P Invoice before the Add button is clicked?

    Hi,
    Are you trying to read the data in the matrix of a system form? Then you only have to look for the matrix with the ID you can see by visualising the System Information in B1, and then read directly from the matrix cells (using the Columns element of the matrix); the DBDataSources are not filled in until the document has been added to the database.
    There are many messages in this forum talking about how to read information from a matrix item; just do a search for it.
    Hope it helps
    Trinidad.

  • How to pass data from list to internal table

    Sir,
    Please tell me how to pass data from a list display to an internal table. Some fields in the list display are input fields; these are added at run time, and the added values should be stored in the internal table.

    Follow this code sample (only the ALV call's callback parameters and the user-command routine are shown):
    CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
      EXPORTING
        i_callback_program       = w_repid
        i_callback_pf_status_set = 'PF_STATUS'
        i_callback_user_command  = 'USER_COMMAND'
    ...
    FORM user_command USING p_ucomm  TYPE sy-ucomm
                            p_selfld TYPE slis_selfield.
      DATA ref1 TYPE REF TO cl_gui_alv_grid.
      CASE p_ucomm.
        WHEN '<some user action>'.
          CALL FUNCTION 'GET_GLOBALS_FROM_SLVC_FULLSCR'
            IMPORTING
              e_grid = ref1.
    *     Flush the edited cells from the grid back into the internal table
          CALL METHOD ref1->check_changed_data.
    *     Your internal table will now have the data
      ENDCASE.
    ENDFORM.
    Regards,
    Ravi
