Update question in a large database with partitions

Hello All!
Please help me with the following issue. I'm given a database that has partitions numbered from 0 to 63. Each partition can contain any number of rows for a given field like "NAME" (this may be the primary key, I think).
How can I update a field in the rows corresponding to a "NAME" within a given partition range, if I don't know which NAME belongs to which partition?
The problem: there are around a million of these NAME values, and only the fastest method is acceptable. I'm thinking of an UPDATE command with a condition on the partition range (x < partition < y) and a WHERE clause like this:
NAME LIKE value1 OR NAME LIKE value2 ... OR NAME LIKE value(10^6) (here I enumerate all of the NAME values): is that a good way?
How long can the WHERE clause of a query sent to the database be?
Or is there another way to do it?
Thank you for your kind help in advance,
Balage

I have a list that contains the NAMES, like:
110
111
112
113
The table is the following:
NAME  OLD  PART
110   0    23
111   1    23
112   1    56
The NAME list is in a txt file, and the OLD field should be updated according to the NAMEs; I'm writing a Perl script to do the job. Each thread is given a partition (PART) interval: for example, the first thread works on
the PART numbers from 1 to 3, the second on 4 to 6, and so on.
The problem is that the NAME list is huge and I don't know which NAMEs are in a
given PART interval.
I thought this command would do the job:
UPDATE table SET old = new WHERE part BETWEEN 1 AND 3 AND (name LIKE '110' OR name LIKE '111' OR ...) etc.
How long can the WHERE clause after "name LIKE" be, given that I have so many NAME records?
Hope you can understand it, sorry for my bad English,
regards,
Balage
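
A note on the approach: many databases limit this in practice (Oracle, for example, allows at most 1000 literals in a single IN list, and a multi-megabyte statement is slow to parse), so enumerating a million NAMEs in the WHERE clause is not workable. The usual alternative is to load the txt file into a helper table once and let the database do the matching. A minimal sketch in plain SQL, assuming the column names from the post (name, old, part); the helper table name_list, the stand-in table name big_table, and the :low_part / :high_part / :new_value placeholders are all invented and would be supplied by the Perl script:

    -- one-time helper table holding the NAME list from the txt file
    CREATE TABLE name_list (name VARCHAR(100) PRIMARY KEY);

    -- ... load the txt file into name_list (SQL*Loader, or the Perl script
    --     inserting rows with placeholders) ...

    -- each worker thread updates only its own partition range; the database
    -- decides which NAMEs fall into that range, so the script never has to know
    UPDATE big_table t
       SET t.old = :new_value
     WHERE t.part BETWEEN :low_part AND :high_part
       AND t.name IN (SELECT n.name FROM name_list n);

With the primary key on name_list(name) and the table partitioned by part, each thread touches only its own partitions instead of scanning a million-term predicate.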

Similar Messages

  • Can we query through 5-6gb large database with AIR

    Since it creates one single file for the whole database, can we have a 5-6 GB database if an AIR application requires it?

    There's no arbitrary limit to the database size; it would depend on performance and the user's file system, I suspect. Only you can judge the performance aspect, since it depends on the complexity of your database and queries.

  • Updating Oracle 8.1.6 database with XML file

    Hi All,
    I need your help for the following scenario.
    I'll transfer data from 13 tables of SQL Server 6.5 into an XML
    file. That file might contain around 70 lakh records (rows).
    These 70 lakh records will be shared by 11 tables in the Oracle
    8.1.6 database, so each table would have approximately 6.5 lakh
    records. How do I transfer this XML file's data into those 11
    tables? Is any tool available to do so?
    And before inserting those 6.5 lakh rows into a table, I need to
    clear that table (i.e. delete all the existing data). In this
    case, if the XML data transfer fails, I need to roll back to the
    deleted old data. I want to know about rolling back such a large
    volume of deleted rows.
    Need your solutions on this.
    Thanx in advance.
    Rgds
    Elav.

    Hi All,
    I need your help for the following scenario.
    I'll transfer data from 13 tables of SQL Server 6.5 into an XML
    file. That file might contain around 70 lakh records (rows).
    These 70 lakh records will be shared by 11 tables in the Oracle
    8.1.6 database, so each table would have approximately 6.5 lakh
    records. How do I transfer this XML file's data into those 11
    tables? Is any tool available to do so?
    SQL*Loader or XSU. Question: why are you trying to use an XML
    file instead of a plain data file?
    And before inserting those 6.5 lakh rows into a table, I need to
    clear that table (i.e. delete all the existing data). In this
    case, if the XML data transfer fails, I need to roll back to the
    deleted old data. I want to know about rolling back such a large
    volume of deleted rows.
    Using SQL*Loader you can specify that the old data be cleaned up
    before insertion, but I'm not sure about rolling back to the
    deleted rows after an insertion failure.
    You can write a small application based on XSU to finish this
    task. But pay attention to the commit size: if your data volume
    is large, you need to set the commit size larger than the data.
    You probably also need to divide the data into smaller pieces so
    as to avoid running out of redo log space.
    Need your solutions on this.
    Thanx in advance.
    Rgds
    Elav.
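
    On the rollback part of the question: if the delete and the reload are done from SQL inside a single transaction (for example via XSU, or an INSERT ... SELECT from a staging table, rather than a direct-path SQL*Loader run), the deleted rows stay recoverable until the final commit, at the cost of enough rollback segment space for all 6.5 lakh deleted rows. A minimal sketch with placeholder table names:

        -- all inside one transaction
        SAVEPOINT before_reload;

        DELETE FROM target_table;

        -- ... insert the new rows here, e.g. INSERT INTO target_table SELECT ... ...

        -- on any failure the old rows come back:
        ROLLBACK TO SAVEPOINT before_reload;
        -- on success:
        COMMIT;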

  • Working with a large database with multiple large tables (Oracle 11)

    Hi all,
    I'm trying to read and parse 3 tables:
    NetworkEvent - 500000 entries
    NewTrapEvent - 500000 entries
    NewTrapVlaue - 2000000 entries
    I used 1 JDBC connection, 2 statements and 3 ResultSets (I used the same statement for two queries).
    My questions are:
    ==========
    1. Is it reasonable? or should I add more connections? or more statements?
    2. Additionally when creating the Result Set I used the following parameters:
    ResultSet.TYPE_SCROLL_INSENSITIVE
    ResultSet.CONCUR_READ_ONLY
    and stmt.setFetchSize(1000)
    Are these parameters valid? Should the fetch size be larger?
    is there any more important parameters that I missed?
    3. When setting the SQL command, I used each command to retrieve all the table entries:
    "select * from ana37.NetworkEvent order by detectiontime"
    Should I take partial entries in a loop (for example 100,000 in each loop), or is it OK for the ResultSet to hold all of the entries (2 million)?
    4. Can I clear the ResultSet once I have finished processing the previous entries?
    thx
    Edited by: user9001513 on 06:59 23/11/2010

    Why don't you use a stored proc?
    Why are you ordering it?
    Should I take partial entries in a loop? Yep. Because software isn't perfect. No point in attempting to process the universe when you know it will fail sometime and it is easier to handle smaller failures than large ones (and you won't have to redo everything.)
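
    If you do read in chunks, the chunking can live in the SQL itself rather than in the ResultSet. A minimal sketch for Oracle (pre-12c style), using the table and ORDER BY column from the post and assuming detectiontime can act as a resume key (add a tie-breaking column if many rows share the same detectiontime):

        -- fetch the next batch after the last row already processed
        SELECT *
          FROM (SELECT e.*
                  FROM ana37.NetworkEvent e
                 WHERE e.detectiontime > :last_seen_time
                 ORDER BY e.detectiontime)
         WHERE ROWNUM <= 100000;

    The application remembers the largest detectiontime it has processed and passes it back in as :last_seen_time for the next batch, so no single ResultSet ever has to hold two million rows.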

  • Oracle Dataguard Question on Physical Standby Database with a Time Lag

    I have a Standby database (PROD_LAG) that has a delay of 24 hrs. How do I check which archives have been applied, what is current, and what is left to do?
    There is a script, but it's for a Logical Standby database...
    Thank you in advance...
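
    For the original question, the usual check is run against v$archived_log. A common query (not specific to this environment):

        -- on the standby: which archived logs have arrived and which have been applied
        SELECT sequence#, first_time, next_time, applied
          FROM v$archived_log
         ORDER BY sequence#;

        -- on the primary: highest sequence generated per thread; the gap to the
        -- highest APPLIED sequence on the standby is what is left to do
        SELECT thread#, MAX(sequence#) AS last_generated
          FROM v$archived_log
         GROUP BY thread#;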

    hi,
    ok a little bit more explanation.
    The GUI does work but seems to report incorrect data.
    I have carried out a switchover so the primary database is now 'offsite_emrep' and the standby database is 'office_emrep'
    The GUI however still reports that office_emrep is the primary database.
    I cannot add the offsite_emrep database as the host is unknown. I am however running the GUI from the Host.
    I have the following from the agent status
    [oracle@griddg bin]$ ./emctl status agent
    #Oracle Enterprise Manager 10g Release 3 Grid Control 10.2.0.3.0.
    Copyright (c) 1996, 2007 Oracle Corporation. All rights reserved.
    Agent Version : 10.2.0.3.0
    OMS Version : 10.2.0.3.0
    Protocol Version : 10.2.0.2.0
    Agent Home : /u01/app/oracle/product/10.2.0/agent10g
    Agent binaries : /u01/app/oracle/product/10.2.0/agent10g
    Agent Process ID : 19797
    Parent Process ID : 19780
    Agent URL : https://griddg.domain.net:3872/emd/main/
    Repository URL : https://griddg.domain.net:1159/em/upload
    Started at : 2007-10-01 12:35:02
    Started by user : oracle
    Last Reload : 2007-10-01 12:35:02
    Last successful upload : (none)
    Last attempted upload : (none)
    Total Megabytes of XML files uploaded so far : 0.00
    Number of XML files pending upload : 116
    Size of XML files pending upload(MB) : 27.67
    Available disk space on upload filesystem : 83.08%
    Last attempted heartbeat to OMS : 2007-10-01 12:38:08
    Last successful heartbeat to OMS : unknown
    Agent is Running and Ready
    any help is appreciated.
    rgds
    alan

  • Creating a question and answer database with a non-incrementing question table

    Hello,
    I have a project in which I am supposed to create a database of questions and answers.
    The questions are not to increase in the database, only the answers.
    Example:
    Question Table:
    Questions: Have you slept well? Do you have children? And so on, up to 12 questions.
    Answer Table:
    Answers: 1. yes, 2. no, 3. maybe, 4. somehow
    and these answers should be stored in the database as numbers: for instance, yes should be 3, no should be 0, maybe should be 2 and somehow should be 1.

    Related forum thread:
    http://stackoverflow.com/questions/4695227/how-to-structure-a-database-with-questions-and-answers
    Kalman Toth Database & OLAP Architect
    SQL Server 2014 Design & Programming
    New Book / Kindle: Exam 70-461 Bootcamp: Querying Microsoft SQL Server 2012
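
    A minimal sketch of the fixed-question / growing-answer design described above, in SQL Server syntax to match the linked thread; the 12 fixed questions and the 0-3 scoring come from the post, all names are assumptions:

        CREATE TABLE Question (
            question_id   INT PRIMARY KEY,
            question_text VARCHAR(200) NOT NULL      -- the 12 fixed questions, loaded once
        );

        CREATE TABLE Answer (
            answer_id     INT IDENTITY PRIMARY KEY,  -- only this table keeps growing
            question_id   INT NOT NULL REFERENCES Question(question_id),
            answer_score  INT NOT NULL               -- yes = 3, maybe = 2, somehow = 1, no = 0
        );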

  • SAP EHP Update for Large Database

    Dear Experts,
    We are planning for the SAP EHP7 update for our system. Please find the system details below
    Source system: SAP ERP6.0
    OS: AIX
    DB: Oracle 11.2.0.3
    Target System: SAP ERP6.0 EHP7
    OS: AIX
    DB: 11.2.0.3
    RAM: 32 GB
    The main concern over here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, where it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Please advise on this.
    Regards,
    Raja. G

    Hi Raja,
    The main concern over here is the DB size: it is approximately 3 TB. I have already gone through forums and notes, where it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Although a 3 TB DB size may not have a direct impact on the upgrade process, the downtime of the system may vary with a larger database size.
    Points to consider
    1) DB backup before entering into downtime phase
    2) Number of Programs & Tables stored in the database. ICNV Table conversions and XPRA execution will be dependent on these parameters.
    Hope this helps.
    Regards,
    Deepak Kori

  • Preventing concurrency errors when updating a database with AJAX

    Here is a question that has arisen on my current project. What are some strategies for avoiding concurrency errors in cases where a given user could attempt to modify the same row in the database with simultaneous requests (say, that he updates one column with information with an AJAX call, then immediately submits a form that updates the same row)? In which layer(s) would this best be handled? Is the best alternative to make the AJAX updates synchronous, rather than asynchronous?

    pford68 wrote:
    Here is a question that has arisen on my current project. What are some strategies for avoiding concurrency errors in cases where a given user could attempt to modify the same row in the database with simultaneous requests (say, that he updates one column with information with an AJAX call, then immediately submits a form that updates the same row)? In which layer(s) would this best be handled? Is the best alternative to make the AJAX updates synchronous, rather than asynchronous?
    The database and database driver should handle that situation.
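
    One common strategy, whichever layer it lives in, is optimistic locking: the row carries a version (or timestamp) column, every update increments it, and an update that arrives with a stale version affects zero rows, so the later of two overlapping requests fails cleanly instead of silently overwriting. A minimal sketch; the table, columns and bind values are illustrative only:

        UPDATE user_profile
           SET phone   = :new_phone,
               version = version + 1
         WHERE id = :id
           AND version = :expected_version;
        -- if zero rows were updated, another request changed the row first;
        -- the caller re-reads the row and retries, or reports a conflict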

  • Problem with large databases

    Lightroom doesn't seem to like large databases.
    I am playing catch-up using Lightroom to enter keywords to all my past photos. I have about 150K photos spread over four drives.
    Even placing a separate database on each hard drive is causing problems.
    The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program, and try the import again, Lightroom adds about 500 more photos and then crashes, or freezes again.
    I may have to go back and import them one folder at a time, or use iView instead.
    This is a deal-breaker for me.
    I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
    I am using XP on a dual core machine with, 3Gigs of RAM
    Anyone else finding this?
    What is you work-around?

    Christopher,
    True, but given the number of posts where users have had similar problems ingesting images into LR--where LR runs without crashes and further trouble once the images are in--the probative evidence points to some LR problem ingesting large numbers.
    It may also be that users are attempting to use LR for editing during the ingestion of large numbers--I found that I simply could not do that without a crash occurring. When I limited it to 2k at a time--leaving my hands off the keyboard--while the import occurred, everything went without a hitch.
    However, as previously pointed out, it shouldn't require that--none of my other DAMs using SQLite do that, and I can multitask while they are ingesting.
    But, you are right--multiple single causes--and complexly interrelated multiple causes--could account for it on a given configuration.

  • My computer has some serious problems: iPhoto only shows thumbnail-size pics when I try to open them, all the photos originally had a large delta with a question mark, and I can't back up the library

    My computer has some serious problems. My iPhoto only shows thumbnail-size pics when I try to open them. I tried to rebuild my files from folders that had the pics in them; originally all the photos had a large delta with a question mark. Also, I can't back up the library file because it's not there. I went to Time Machine and tried to find the file, but I can't find it or I am looking in the wrong place. I also lost my iDVD file; only a broken chain is showing.

    Details please
    What version of iPhoto and of the OS?
    I tried to rebuild my files from folders that had the pics in them.
    Exactly what did you do and how did you do it? This may be the cause of your issue, but without details we can only guess.
    My iPhoto only shows thumbnail-size pics when I try to open them.
    Where do you see this? In the iPhoto window? What does "try to open them" exactly mean?
    Originally all the photos had a large delta with a question mark.
    OK - this usually has a simple solution - do you still have a copy of the library that has this problem?
    Also I can't back up the library file because it's not there.
    This makes no sense at all - all of your previous statements indicate that you do have an iPhoto library but have some problems with it.
    By default your iPhoto library is located in your Pictures folder and is named iPhoto Library - if that is not the case then you have moved or renamed it, and only you know what you did until you tell us the details.
    I went to Time Machine and tried to find the file, but I can't find it or I am looking in the wrong place.
    Again, unless you actually share what you are doing and how you are doing it, rather than simply stating abstract problems, it is not possible to assist you - details on using Time Machine are here: http://support.apple.com/kb/HT1427?viewlocale=en_US&locale=en_US and http://pondini.org/TM/FAQ.html
    I also lost my iDVD file; only a broken chain is showing.
    This would be better addressed in the iDVD forum - but again, unless you share detailed information no one can assist.
    LN

  • I have an error message "page_bottom_overlay-2.png" which comes up when I try to publish.  There is a large X with a Question mark in the middle over the entire background of the page.  The error message says the file is missing.  I did not delete files.

    I have an error message "page_bottom_overlay-2.png" which comes up when I try to publish.  There is a large X with a Question mark in the middle over the entire background of the page.  The error message says the file is missing.  I did not delete files.  How can I find files that seem to be missing?

    This is probably one of the files that is required by the template you are using.
    These files are inside the iWeb app. Control click the iWeb app icon and select "Show package contents".
    You need to dig down through the folders and files to find what you want...
    Contents/Resources/da.lproj/Templates/
    If the file is missing you would need to reinstall the iWeb app...
    http://www.iwebformusicians.com/iWeb/iWeb-Tips.html

  • How to update database with a select box

    I'm hoping someone can/will help me with a new feature I'm
    trying to add to my web site. I'll summarize what I'm working with
    and then proceed to what I'd like to do. This is the web site I'm
    working on: www.truckerstoystore.net
    I have a database for the Truck of the Week set up with the
    information that is output on each page in the left column and
    on the Truck of the week page itself. I add new Truck of the
    Week (TOW) entries via a form I've put together. Right now, in
    order to change the current TOW, I have to manually go into my
    template and change the ID (which is automatically assigned when
    the record is created, and thus makes it unique) in my SQL which
    currently reads
    SELECT *
    FROM truckofweek
    WHERE ID="4"
    to the ID of the current TOW.
    What I want to be able to do is create a new form that will
    allow me to select the TOW entry that I want to be displayed from a
    select box (drop down box). I have a good idea of how to populate
    my select box, but don't know how to get it to work. My first idea
    was to update the table in the database (Access) manually with a
    new column called currentTOW, with values set to a Yes/No type,
    with default values set to "No". Then I would use my form to set
    the value for one of them to "Yes" so I could set my SQL to 'WHERE
    currentTOW ="yes" ' I would also make a <cfif> statement that
    checks for entries marked "Yes" and changes them to "No" when the
    form loads to avoid setting 2 entries to "Yes" and my page thus
    attempting to load 2 TOW entries.
    My problem is, I don't know how to do any of this. I hope
    I've described this situation well enough. I know there has to be
    at least one guru on here that can help me. Any assistance would be
    greatly appreciated.
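
    For what it is worth, the currentTOW flag idea described above boils down to two statements on the form's action page plus a simpler query on the display pages. A minimal sketch in Access SQL, assuming the Yes/No column currentTOW has already been added to truckofweek (the ID value 4 is just the example from the post):

        -- clear any previously flagged truck, then flag the one chosen in the select box
        UPDATE truckofweek SET currentTOW = False;
        UPDATE truckofweek SET currentTOW = True WHERE ID = 4;

        -- the Truck of the Week page and the left column then only need:
        SELECT * FROM truckofweek WHERE currentTOW = True;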

    Hey,
    Thanks for replying. That sounds like a good idea, but I
    don't know how to do it.
    I started working on a new idea, where I have a second table
    in the same datasource set up called currentTOW, with one field
    called currentTOW and only one record. The idea is to send the
    string of the "owner" field from the select box, which is populated
    from the table called truckofweek to this one cell in the
    currentTOW table.
    This way, in my page which will display the data, I have 2
    queries. The first pulls the data in the single cell from currentTOW
    and outputs its string into the second query. I've attached the
    code below. I get an error when I try to display the page, see the
    error here:
    www.truckerstoystore.net/currentTOW2.cfm.
    However, the SQL looks like I want it to look, as "WHERE Owner =
    Craig Carp" is essentially the same record that it displays now in
    the live page "/currentTOW.cfm" only the SQL currently reads "WHERE
    ID = 4" (4 and Craig Carp are part of the same record).
    Here is the link to my form:
    Form

  • REUSE_ALV_GRID_DISPLAY - updating the database with new values

    Hi,
    I am using the function module 'REUSE_ALV_GRID_DISPLAY' to display records. I have managed to open a field for input/edit mode. Once this data has been changed, where does it go? I have checked the internal table - no joy. I need to use this new/changed data to update the database with the new values.
    Thanks,
    Leanne

    Hi,
    The data is stored in table tab. After the changes, the new values are not reflected in table tab. Where can I get the data so that I can update my database?
    I have coded the process as follows:
        when '&SUSPEND'.
          loop at tab where box = 'X'.
            update zzzz
              set status = 'SP'.   " note: as written there is no WHERE clause, so every row of zzzz is updated
          endloop.
    Regards,
    Leanne

  • Constantly inserting into large table with unique index... Guidance?

    Hello all;
    So here is my world: central to our data monitoring system we have an Oracle database running Oracle Standard Edition One licensing (please don't laugh... I understand it is comical).
    This DB is about 1.7 TB of small record data.
    One table in particular (the raw incoming data, 350gb, 8 billion rows, just in the table) is fed millions of rows each day in real time by two to three main "data collectors" or what have you. Data must be available in this table "as fast as possible" once it is received.
    This table has 6 columns (one varchar usually empty, a few numerics including a source id, a timestamp and a create time).
    The data is collect in chronological order (increasing timestamp) 90% of the time (though sometimes the timestamp may be very old and catch up to current). The other 10% of the time the data can be out of order according to the timestamp.
    This table has two indexes, unique (sourceid, timestamp), and a non unique (create time). (FYI, this used to be an IOT until we had to add the second index on create time, at which point a secondary index on create time slowed the IOT to a crawl)
    About 80% of this data is removed after it ages beyond 3 months; 20% is retained as "special" long term data (customer pays for longer raw source retention). The data is removed using delete statements. This table is never (99.99% of the time) updated. The indexes are not rebuilt... ever... as a rebuild is about a 20+ hour process, and without online rebuilds since we are standard one, this is just not possible.
    Now what we are observing is that the inserts into this table
    - Inserts are much slower based on a "wider" cardinality of the "sourceid" of the data being inserted. What I mean is that 10,000 inserts for 10,000 sourceids (regardless of timestamp) are MUCH, MUCH slower than 10,000 inserts for a single sourceid. This makes sense to me; as I understand it, Oracle must inspect more branches of the index for uniqueness, and more distinct physical blocks will be used to store the new index data. There are about 2 million unique sourceids across our system.
    - Over time, Oracle is requesting more and more RAM to satisfy these inserts in a timely manner. My understanding here is that Oracle is attempting to hold the leaf blocks of these indexes perpetually in the buffer cache. Our system does have a 99% cache hit rate. However, we are seeing Oracle requiring roughly 10 GB of extra RAM per quarter to six months; we're at about 50 GB of RAM just for Oracle already.
    - If I emulate our production load on a brand new, empty table / indexes, performance is easily 10x to 20x faster than what I see when I do the same tests with the large production copies of data.
    We have the following assumption: Partitioning this table based on good logical grouping of sourceid, and then timestamp, will help reduce the work required by oracle to verify uniqueness of data, reducing the amount of data that must be cached by oracle, and allow us to handle our "older than 3 month" at a partition level, greatly reducing table and index fragmentation.
    Based on our hardware, it's going to be about a million dollar hit to upgrade to Enterprise (with partitioning), plus a couple hundred thousand a year in support. Currently I think we pay a whopping 5 grand a year in support, if that, in total Oracle costs. This is going to be a huge pill for our company to swallow.
    What I am looking for guidance / help on: should we really expect partitioning to make a difference here? I want to get back that 10x performance difference we see between a fresh empty system and our current production system. I also want to limit Oracle's 10 GB per quarter growing need for more buffer cache (the cardinality of sourceid does NOT grow by that much per quarter... maybe thousands per quarter, out of 2 million).
    Also, I'd appreciate it if there were no mocking comments about using Standard One up to this point :) I know it is risky and insane and maybe more than a bit silly, but we make do with what we have. And all the credit in the world to Oracle that their "entry" level system has been able to handle everything we've thrown at it so far! :)
    Alright all, thank you very much for listening, and I look forward to hear the opinions of the experts.
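
    For reference, a minimal sketch of the kind of scheme the poster is weighing (Oracle 11g Enterprise Edition with the Partitioning option): range-partition by timestamp with hash subpartitions on sourceid, so uniqueness checks touch only one small local index segment and the three-month purge becomes a partition drop instead of mass deletes. All names are invented, and it ignores the 20% of "special" rows that must outlive the purge (those would have to be copied aside first):

        CREATE TABLE raw_incoming (
            sourceid    NUMBER       NOT NULL,
            ts          TIMESTAMP    NOT NULL,
            create_time TIMESTAMP    NOT NULL,
            val         NUMBER,
            note        VARCHAR2(50)
        )
        PARTITION BY RANGE (ts) INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
        SUBPARTITION BY HASH (sourceid) SUBPARTITIONS 16
        ( PARTITION p_first VALUES LESS THAN (TIMESTAMP '2011-01-01 00:00:00') );

        -- local indexes: each monthly partition carries its own small index segments
        CREATE UNIQUE INDEX raw_incoming_uk ON raw_incoming (sourceid, ts) LOCAL;
        CREATE INDEX raw_incoming_ct ON raw_incoming (create_time) LOCAL;

        -- ageing out a month is then a metadata operation, e.g.
        -- ALTER TABLE raw_incoming DROP PARTITION <partition older than 3 months>;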

    Hello,
    Here is a link to a blog article that will give you the right questions and answers which apply to your case:
    http://jonathanlewis.wordpress.com/?s=delete+90%25
    Since you are deleting 80% of your data (the old data) based on a timestamp, don't think at all about using the direct-path insert /*+ append */ suggested by one of the contributors to this thread. A direct-path load will not re-use any free space made by the delete. You have two indexes:
    (a) unique index (sourceid, timestamp)
    (b) index(create time)
    Your delete logic (based on arrival time) will smash your indexes, since you are always deleting from the left-hand side of the index; it means you will end up with what we call a right-hand index. In other words, the scattering of index keys per leaf block is certainly catastrophic (there is an Oracle internal function named sys_op_lbid that will allow you to verify this index information). There is a fair chance that your two indexes will benefit from a coalesce, as already suggested:
        ALTER INDEX indexname COALESCE;
    This coalesce should be considered on a regular basis (maybe after each 80% delete). You seem to have several sourceids for one timestamp. If that is the case, you should think about compressing this index:
        create index indexname on yourtable (sourceid, timestamp) compress;
    or
        alter index indexname rebuild compress;
    You will do it only once. Your index will have a smaller size and may be more efficient than it is now. The index compression will add extra CPU work during an insert, but it might help improve the overall insert process.
    Best Regards
    Mohamed Houri

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with were barely over 1 trillion bytes.
    Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
    I could guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses
