Some doubts about BDB XML Normal Recovery

Hi everyone,
I have read the document "Getting Started with Transaction Processing for Java" shipped with BDB XML 2.4.13. The book says the following about normal recovery: "Normal recovery is run only against those log files created since the time of your last checkpoint." To test this, I designed the scenario below:
The ENVIRONMENT directory is E:/bdb-xml/environment, the BACKUP directory is E:/bdb-xml/backup, the CONTAINER name is entry.dbxml, and there is already a document 1.xml in this container.
1. run db_recover against ENVIRONMENT.
2. copy entry.dbxml to BACKUP.
3. create a document 2.xml.
4. run checkpoint against ENVIRONMENT.
5. modify document 1.xml.
6. run checkpoint against ENVIRONMENT.
7. copy log.0000000001 (the only log file in ENVIRONMENT) to BACKUP. Note that I did not copy entry.dbxml from ENVIRONMENT.
8. run db_recover against BACKUP (which now contains two files: entry.dbxml and log.0000000001). After that, I used BACKUP as the environment directory and tried to query 2.xml, and the document was retrieved correctly, which surprised me. According to the documentation, the last checkpoint was created in step 6, and nothing was modified after it, so I expected the changes from steps 3 and 5 not to take effect when db_recover ran. Yet both changes had been committed to entry.dbxml.
So, which one is the last checkpoint? And what exactly are "those log files created since the time of your last checkpoint"?
I would also like to know where checkpoints are written: in the database files or in the log files.
Thanks in advance.
Regards,
John Kao.

John,
You really do want to know the gory details, don't you? :-)
Running recovery in your backup directory will cause the container there to pick up all changes from the log file that it does not yet have. The checkpoint on the original container doesn't mean anything to the backup container.
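The idea can be sketched as a toy simulation (this is not the real BDB recovery code; the LSN field simply stands in for a log sequence number): recovery compares the container's last-applied position with the records in the log and replays whatever the container has not yet seen, regardless of where the original environment's checkpoints were.

```java
import java.util.List;

class RecoveryDemo {
    // A committed change recorded in the transaction log.
    record LogRecord(long lsn, String change) {}

    // Replay every log record newer than the container's last-applied LSN.
    // This is why the stale backup copy of entry.dbxml catches up: it was
    // copied before steps 3 and 5, so its LSN predates both changes.
    static long replay(long containerLsn, List<LogRecord> log) {
        long current = containerLsn;
        for (LogRecord r : log) {
            if (r.lsn() > current) {
                System.out.println("applying: " + r.change());
                current = r.lsn();
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<LogRecord> log = List.of(
                new LogRecord(10, "create 2.xml"),   // step 3
                new LogRecord(20, "modify 1.xml"));  // step 5
        long backupLsn = 5;   // the backup container was copied before both changes
        long freshLsn = replay(backupLsn, log);
        System.out.println("container now at LSN " + freshLsn); // 20
    }
}
```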
Let me point you to even more interesting documentation that is in the Berkeley DB documentation set. This page has all of the BDB documentation, including links that are not included in the BDB XML doc:
http://www.oracle.com/technology/documentation/berkeley-db/db/index.html
The "Getting Started with Transaction Processing" documents on that page have the sort of information you seem to want.
Regards,
George

Similar Messages

  • Some Doubts about Event Handlers

    Hi,
    I had some doubts on Event handlers in OIM 11.1.1.5 ........
    1) I want to use the same event handler for both Post Insert and Post update task.... Can I use the same event handler for that... If yes then how can I do that....
    2) Can I create the single Plugin.xml class and add the all jar files in single say lib folder and zip them all together.. if yes then What changes I need to do?? Need to add only the plugin tags for different class files in plugin.xml file? OR need to do some thing extra also...?
    3) If i need to change any thing in any class of event handler.. Is there need to unregister the plugin and again register...??
    If yes.... Is there need to delete the event handler using the weblogicDeleteMetadata command???
    4) As we Import the event handler from path like event handler/db/... If we add all the evetn handler.xml files in that folder..... As During Import weblogicImportMetadata recursively call all the files in that folder.... Now if i need to change anything in any one of event handler class... then if we import from the same folder event handler/db/... What will it do............Create the duplicate copy of all the eventhandlers????? OR i need to add only those Eventhandler.xml files for those class files i made the changes.....
    5) As I need to create email on user creation during recon and also email id get updated as first name or last name updates..... What I had to use in Event handler.xml (entity-type="User" operation="CREATE") or Some thing else....
    Help me clarify my doubts...

Anil Bansal wrote:
> 1) I want to use the same event handler for both the post-insert and post-update tasks. Can I use the same event handler for that, and if so, how?
Yes, you can. Just have two event handlers in the same MDS file, with operation CREATE for one and MODIFY for the other. The class, version, and name remain the same.
> 2) Can I create a single plugin.xml, add all the jar files to a single lib folder, and zip them all together?
Yes, in the single plugin.xml you can define multiple event handlers, and the jar can contain multiple event handler classes.
> 3) If I change anything in any event handler class, do I need to unregister the plugin and register it again? If so, do I need to delete the event handler using the weblogicDeleteMetadata command?
No. If you are just changing the class, you only need to update the class in the plugin: first delete the plugin, then register the updated plugin, then purge the cache.
> 4) If I change one event handler class and import from the same folder again, will that create duplicate copies of all the event handlers, or should I import only the EventHandler.xml files for the classes I changed?
It won't create duplicate copies; it overwrites the files already in MDS at the same location. So effectively, if an XML file is not changing, you should not be worried about overwriting it.
> 5) I need to create an email address on user creation during recon, and also update it as the first name or last name changes. What should I use in EventHandlers.xml?
For recon and the event handler, you will need a post-process event handler on User CREATE and UPDATE. On CREATE, construct the email address and populate it in the email field. On UPDATE, check whether the first name/last name are changing and, if so, update the email ID on the profile.
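For question 1, the "two handlers, same class" shape looks roughly like this. This is a sketch from memory, not copied from Oracle's documentation, and the class and handler names are invented; check the OIM developer's guide for the exact schema and attributes:

```xml
<!-- EventHandlers.xml sketch: the same class registered for CREATE and MODIFY -->
<eventhandlers xmlns="http://www.oracle.com/schema/oim/platform/kernel">
  <action-handler class="com.example.oim.UserEmailHandler"
                  entity-type="User" operation="CREATE"
                  name="UserEmailOnCreate" stage="postprocess"
                  order="1000" sync="TRUE"/>
  <action-handler class="com.example.oim.UserEmailHandler"
                  entity-type="User" operation="MODIFY"
                  name="UserEmailOnModify" stage="postprocess"
                  order="1000" sync="TRUE"/>
</eventhandlers>
```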

  • Confused about transaction, checkpoint, normal recovery.

After reading the documentation PDF, I am getting confused by its description.
Rephrased from a paragraph in the transactions PDF:
    "When database records are created, modified, or deleted, the modifications are represented in the BTree's leaf nodes. Beyond leaf node changes, database record modifications can also cause changes to other BTree nodes and structures"
    "if your writes are transaction-protected, then every time a transaction is committed the leaf nodes(and only leaf nodes) modified by that transaction are written to JE logfiles on disk."
    "Normal recovery, then is the process of recreating the entire BTree from the information available in the leaf nodes."
According to the description above, I have the following concerns:
1. If I open a new environment and db, insert/modify/delete several million records, and never reopen the environment, then normal recovery is never run. Does that mean the BTree is incomplete so far? Will that affect query efficiency? Or, even worse, will it output incorrect results?
2. If my thinking above is correct, then every time I finish committing transactions I need to let the checkpointer run in order to recreate the whole BTree. If my thinking is not correct, that means I don't need to care about anything: I just call transaction.commit() or db.sync() and let JE take care of all the details. (I hope this is true :>)
    michael.

    http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/chkpoint.html
    Checkpoints are normally performed by the checkpointer background thread, which is always running. Like all background threads, it is managed using the je.properties file. Currently, the only checkpointer property that you may want to manage is je.checkpointer.bytesInterval. This property identifies how much JE's log files can grow before a checkpoint is run. Its value is specified in bytes. Decreasing this value causes the checkpointer thread to run checkpoints more frequently. This will improve the time that it takes to run recovery, but it also increases the system resources (notably, I/O) required by JE.
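The property the quoted page mentions goes in the je.properties file in the environment home directory; a minimal sketch (the byte value is an arbitrary example, not a recommendation):

```properties
# je.properties in the JE environment home directory.
# Run a checkpoint after roughly every 20 MB of new log data.
# Smaller value = faster recovery, but more checkpoint I/O.
je.checkpointer.bytesInterval=20000000
```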
    """

  • Miscellanous questions about BDB XML

    Hi !
I'm looking for a storage solution for a Matlab app that manipulates big volumes of data (several GB), and so can't load them fully into memory without crashing. I also can't load/unload them each time I need a bit of the data, since loading a file into memory is rather slow (about 0.12 s). So I was thinking about using a DBMS like BDB XML, and I have a few questions about it:
- What about performance when creating a 3-5 GB database in a single batch?
- What about performance when executing an XQuery request on a db this large? Longer or shorter than loading the file directly into memory? With an index or without?
- No Matlab integration is provided, so I have two ways: use Matlab's C integration to build an interface to BDB XML, or use the shell via an exec-like command to interact with BDB. Is the shell trick performant, or does it spend a lot of time parsing the input?
Thanks to those who will take a bit of their precious time to answer my questions!

    Hello,
> What about performance when creating a 3-5 GB database in a single batch?
It will take a while. If you bulk load, you should avoid using transactions, and sync/exit the environment when you are done. Note that you should determine what indexes you might want/need before doing the load, and create them first; reindexing 5 GB of data will take another really large chunk of time. I recommend experimentation with indexes, queries, and a small representative subset of the data.
Be sure to create a node storage container with nodes indexed.
Is this one document or many? Many is better. One 5 GB document is less than ideal, but it will work.
> What about performance when executing an XQuery request on a db this large? Longer or shorter than loading the file directly into memory? With an index or without?
You really need indexes. The query will likely succeed without indexes, but depending on the query and the data it could take a very long time. See above on experimentation first.
> Is the shell trick performant, or does it spend a lot of time parsing the input?
There is no C interface, just C++. I would not recommend using the dbxml shell for this, although you could if you really need to.
    Let the group know how this turns out.
    Regards,
    George

  • Doubt about actions.xml with actions and roles

    Hi all,
We are using a file like actions.xml in our Web Dynpro applications, describing actions like:
Is it possible to describe GROUPs, assigning roles to them, in the same XML instead of doing this with the useradmin application? We need to describe the roles in the XML because we are using around 25 ROLEs and 15 GROUPs.
We would appreciate it if you could show us a complete description, with an example, of defining those GROUPs in the XML with all the necessary tags and properties.
Thanks in advance.
Raúl

This feature is one of the hidden features SAP has for deploying stuff to NW. I'm sure there is a way to do that, but it's not documented, as the role extension is also not documented. I don't know why SAP is hiding these extremely useful features from normal developers. Especially for product development they are so useful.
Did you know that it's possible to deploy database content (not just tables!) with a special DC and an XML file in a special format? Just another example of hidden features in SAP NetWeaver.

  • Hi all, I have some doubts about this scenario

    Please explain step by step this scenario:
    1) Involved in two activities: Monitoring and Production Support.
    2) Actively and regularly involved in load monitoring of daily, weekly, and monthly data loads using process chains.
    3) Actively involved in rectification of load failure errors like master data loads and transaction data loads.
    4) Supported the client by providing long-term solutions to tickets by doing root cause analysis.
    5) Monitoring InfoPackages and analyzing the reasons for frequent InfoPackage failures.

    Hi,
    Here are some responsibilities in a production system.
    Answer to the first question: maintaining documentation (in MS Word) for all support-related activities, such as:
    a) frequently occurring data load monitoring errors as part of process chains
    b) how to analyze whether a load has been running for a long time
    c) raising tickets with the proper priority. For example, if a failed load affects many reports, we have to raise a P2 ticket; it depends on the impact of the loads on report availability.
    Answer to the second question:
    In production support, we have to monitor InfoPackage groups or process chains, because loads are automated using process chains or InfoPackage groups. Whenever a load fails as part of them, we have to analyze why it failed. Loads may fail for reasons like memory issues, attribute change run locking issues, SID issues, etc.
    Answer to the third question:
    In the Peregrine ticketing tool, we can see both system-generated tickets and customer-generated tickets. For example, Tivoli tickets are system-generated: whenever a process chain is not triggered, or any process of a chain fails, the system automatically raises a ticket.
    Customer-generated tickets means that when any failure happens as part of the daily loads, we raise a ticket with the proper priority to the appropriate resolution group (incident management tickets).
    RMs, or request management tickets, are requests raised by the client. For example, if they want ad hoc loads for the Mexico markets, they raise an RM and send it to the offshore queue.
    Answer to the fourth question:
    We can maintain an Excel sheet listing which loads failed, with the error information, as part of process chains or InfoPackage groups, and maintain logs for the manual loads. How many loads failed or succeeded each day, and the total number of loads, can be checked in RSMO; that information can be kept in the Excel sheet.
    Root cause analysis is nothing but giving a permanent solution to frequent errors.
    Ali.

  • Some doubts about the use of < >

    Hello everyone,
    I am a newbie in Java programming and need some help. Recently I was viewing code from some existing open source Java projects, and I found uses of the < > brackets when defining classes, interfaces, constructors, and objects. As a student I do not know much about this syntax, and I am not very familiar with data structures, if they are relevant here. I use "Java 2: The Complete Reference" for learning Java and have not encountered this syntax until now, but I would like to know what it is used for, what it is called, and anything more about it, such as its advantages. I am looking forward to help from you all.
    If anybody has links to tutorials that teach this syntax, or any books, please don't forget to mention them.
    Thanks in advance.

    By the way, one thing I would like to know: to learn generics, do I have to have knowledge of data structures? I am not familiar with data structures, and I found the reference to the collections framework, so I thought I would have to learn DS anyway. Please tell me what you think. Thanks in advance.

    No, it is not necessary to know about data structures at all while using generics.
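The < > brackets are Java generics: they parameterize a class or method by a type. A minimal sketch (the class name `Box` is just an example):

```java
// Box<T> is a class parameterized by a type T, so the compiler can
// check at compile time what kind of object the box holds.
class Box<T> {
    private final T value;

    Box(T value) { this.value = value; }

    T get() { return value; }

    public static void main(String[] args) {
        Box<String> name = new Box<>("hello");   // here T = String
        Box<Integer> count = new Box<>(42);      // here T = Integer
        String s = name.get();  // no cast needed; the compiler knows T
        System.out.println(s + " " + count.get());
    }
}
```

Before generics (introduced in Java 5), such a box would hold a plain Object and every `get()` would need a cast that could fail at runtime; generics move that check to compile time.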

  • Some doubts about XMLP 5.6.2

    Hi,
    A question: can it, or must it, be installed in a separate $ORACLE_HOME? I have a 10g instance (**no** APPS).
    And a problem: I installed it in a separate ORACLE_HOME and created a user with all the XMLP roles except admin, and a "security violation error" occurs when I try to create a folder.
    Any tip?
    thanks,
    Sandro.

    Hi Brett,
    Yes, I understand that this is the first version without the APPS dependency.
    As I said, I installed it in a separate ORACLE_HOME; the install guide is not very clear about this, but I believe that is not a problem.
    Any hints about the error?
    thanks,
    Sandro.

  • Some doubts about RAID 5

    Hello Joe,
    Yes, I have a backup!
    I'm not really sure whether there was a hot-spare disk or not. I assumed there wasn't, because I looked through all the controller logs and didn't see any reference to a hot-spare drive. But I have to say, this cabinet's logs lack a lot of information.
    So, if you're telling me that RAID 5 doesn't have any weird "rebuild" functionality when a drive is missing (except when there's a hot-spare drive), then there probably was a hot-spare disk. I will try to contact the former IT guy in case he remembers how he configured that cabinet.
    Thanks for your reply!

    Hello everyone! I started a new job in a company that spent 6 months without any IT worker. I'm still trying to discover how the entire infrastructure was built. Obviously there's a terrible lack of documentation, but at least I have no pressure from my workmates. Anyway, the thing is that the biggest storage cabinet (the one that holds the users' home directories and other scientific data) is based on a RAID 5 (bad!) on cheap Infortrend hardware (very bad!). The web interface sucks, as usual, and it looks like a black box where you can't touch anything. Just a week after my arrival, one of the drives failed. I removed that drive and already ordered a new one (actually two, just in case). The weird thing is, once I removed the bad drive, and without adding the new one (still not delivered), the controller started to "Rebuild" the...
    This topic first appeared in the Spiceworks Community

  • Some doubts about combo drives on the T60 and T61.

    Hello guys,
    I own a T60 (200742u) and a T61 (7663b93) ThinkPad, and I would like to know whether I can use the combo drive (part 39T2737) that came with my ThinkPad T60 in the newer ThinkPad T61. Is it compatible?
    Thanks in advance.
    Ricardo
    ThinkPad T60 and ThinkPad R61
    Dell Vostro 1400 and HP Pavilion DV2100t
    Atom Life and HTC 710
    Nokia 5610 and SE 750i

    R6x systems use the Ultrabay Enhanced drive, which is a thicker version of the Ultrabay Slim drive used in the T4x and T6x systems; they all have the same shape and connector layout, just different thicknesses. You can use an Ultrabay Slim drive in an R6x system, but not the other way around (the Enhanced drive won't physically fit into the Slim slot). If you don't use an Ultrabay Slim-to-Enhanced adapter, there will be a small gap between the Slim drive and the Ultrabay Enhanced bay in the R61; if you don't move the laptop it will be okay, but otherwise the Slim drive may fall out.

  • Some doubts about automatically created search feature

    Hi everybody,
    if I create a report page apex provides me a basic search feature (a single textbox the user can use to search in various fields).
    The code automatically created is (in apex 2.1.x) something like:
    SELECT ...field lists...
    FROM tablename
    WHERE
    instr(upper("FIRST_NAME"),upper(nvl(:P1_REPORT_SEARCH,"FIRST_NAME"))) > 0 or
    instr(upper("LAST_NAME"),upper(nvl(:P1_REPORT_SEARCH,"LAST_NAME"))) > 0 or
    instr(upper("CITY"),upper(nvl(:P1_REPORT_SEARCH,"CITY"))) > 0 or
    ...and so on, one statement for each field we want to search for.
    I don't understand:
    1) Why does APEX use instr instead of a standard LIKE '%..%'? Is the choice driven by performance issues?
    2) Instead of using the nvl "trick", IMHO this should be enough:
    :P1_REPORT_SEARCH IS NULL OR
    instr(upper("FIRST_NAME"),upper(:P1_REPORT_SEARCH)) > 0 or
    instr(upper("LAST_NAME"),upper(:P1_REPORT_SEARCH)) > 0 or
    instr(upper("CITY"),upper(:P1_REPORT_SEARCH)) > 0
    which is clearer and maybe faster; am I right?
    3) The nvl "trick" doesn't work for empty fields: if a record has null values in all the fields the query searches, that record won't be displayed in the report, even if the user leaves the search box blank and clicks "Go". This seems to me a bug; has it been resolved in the latest release of APEX?
    4) Last question, not actually related to the "basic" search: if I need to verify whether a search field (imagine an advanced search form) was left blank, I can just use ":FIELD_NAME IS NULL". This works for text boxes but doesn't work for list boxes, which need something like ":FIELD_NAME = '%null%'", because %null% seems to be the value returned by that kind of field when left blank. Am I right? What is the meaning of %null%?
    Thank you very much and sorry for the long post.
    Ciao.
    Eugenio

    Hi Eugenio,
    Firstly, you are welcome to alter the generated query however you wish. If you don't like what is created by APEX, change it to suit your purposes.
    To answer your questions:
    1. It's easier for customers to understand INSTR than LIKE. With LIKE, you have to be careful to include the wildcard characters. Performance should be roughly equivalent.
    2. The NVL is necessary. If it were not, then if no search criteria was specified, then no rows would ever be returned. Try your suggested SQL out yourself.
    3. You're correct - this won't work if you have null values for all fields. That's not a bug in APEX, though. I don't know too many application requirements where you would want to maintain null values for all fields. Do you?
    4. You are correct. For List Manager and other items which use a List of Values, if you don't specify a string to use for NULL, then '%null%' will be used. You can override this in the LOV settings for the item, but keep in mind - you have to specify something, otherwise, '%null%' will be used. Patrick Wolf posted a nice, generic solution to this: Re: Null value handling in LOVs
    I hope this helps.
    Joel
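Joel's point 2 can be seen by simulating the two predicates in plain Java (a toy sketch, not APEX code; method names and sample data are invented). With the NVL-style fallback, a NULL search term makes the column match itself, so every non-null row is returned; without it, a NULL search term matches nothing:

```java
import java.util.List;

class SearchPredicateDemo {
    // Mimics instr(upper(col), upper(nvl(search, col))) > 0:
    // when search is null, the column is compared against itself,
    // so any non-null column value matches.
    static boolean withNvl(String col, String search) {
        if (col == null) return false;  // instr on a NULL column is never > 0
        String needle = (search == null) ? col : search;  // the nvl()
        return col.toUpperCase().contains(needle.toUpperCase());
    }

    // Mimics instr(upper(col), upper(search)) > 0 with no nvl:
    // a NULL search term makes the predicate false for every row.
    static boolean withoutNvl(String col, String search) {
        if (col == null || search == null) return false;
        return col.toUpperCase().contains(search.toUpperCase());
    }

    public static void main(String[] args) {
        List<String> names = List.of("Scott", "Eugenio", "Joel");
        // Blank search box: the bind variable is NULL.
        long nvlMatches = names.stream().filter(n -> withNvl(n, null)).count();
        long plainMatches = names.stream().filter(n -> withoutNvl(n, null)).count();
        System.out.println(nvlMatches + " vs " + plainMatches);  // 3 vs 0
    }
}
```

It also shows Joel's answer to point 3: a row whose columns are all null fails even the NVL form, which is the behavior Eugenio observed.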

  • Installing BDB XML and including it into a java application

    Hi there :)
    I'm new to BDB XML and I've got some questions about it:
    1) The documentation says that BDB XML is based on BDB. Does that mean I need to install the standard BDB first to use BDB XML? (I'm pretty sure I don't, but I would like confirmation =))
    2) I want to develop a web application which will use BDB XML. The documentation says I need to add some jars to my project to use it, but do I need to install it on my computer (Windows XP) first? (with the .msi file?)
    3) Where will the XML files be physically stored after I add them to the database? (What is the default database folder on the hard disk?)
    Thank you for you help :)
    Regards,
    Gary

    Hi Gary,
    1) You need to install only DB XML. BDB is bundled with DB XML, so you don't have to worry.
    2) Yes, you need to add two jars: db.jar and dbxml.jar. But these jars depend on native DB XML libraries, so you will have to install DB XML anyway: the jars alone are not sufficient. Moreover, I would recommend installing DB XML from source, passing the --enable-java flag to the buildall.sh script.
    3) All XML files will be stored in a container or containers (depending on how many of those you decide to use in your application). It is up to you where you place the DB XML environment and containers; DB XML is quite a low-level XML database.
    Hope this helps,
    Vyacheslav

  • BDB XML DOM Implementation

    Hi all---
    I have some newbie questions about BDB/XML's DOM Implementation and its interaction with Xerces-c.
    We are trying to deploy BDB/XML underneath our current database abstraction layer. The application makes use of Xerces-c and the abstraction layer query/get interfaces return objects of type Xercesc-XXXX::DOMDocument*. I can easily get documents out of BDB/XML and return the DOM to the upper layer by use of the XmlDocument::getContentAsDOM() interface.
    The problem occurs as the upper application layers start to manipulate the Document. For instance, in order to print the document, some code creates a serializer (DOMWriter) using Xerces-c, but when applied to the Document returned by BDB/XML the serializer corrupts the DOM and we get an ugly crash.
    I'm completely new to the intricacies/compatibility issues between DOM implementations---is what I am describing here supported in theory? Or is there a fundamental problem---some incompatibility between a Xerces-c DOMWriter and a BDB/XML DOMDocument?
    FWIW, the error appears to be caused by the Xerces-C memory manager, which apparently has no idea about pages being used by BDB and is allocating structures on top of BDB objects.
    Any ideas? Advice where to investigate?
    thanks,
    SF

    Steve,
    First, the Xerces-C DOM implementation in BDB XML is not entirely complete, and is mostly read-only from an application perspective. So if you are doing anything to modify the returned DOM you run some risk. It's implemented using Xerces-C 2.7.
    Second, the availability of the Xerces-C DOM in BDB XML has a limited lifetime. It will almost certainly not be available in the next release of BDB XML, so it's not something you should rely on. You may be best off serializing your results and, if you want to manipulate them with Xerces, re-parsing them into a DOM implementation that you control. I realize there is a loss of efficiency in doing this.
    In our next release there are changes being made (for very good reasons) that make it impossible to maintain the XmlDocument::getContentAsDOM() interface. If we did keep it, we'd just be serializing and re-parsing anyway.
    Regards,
    George
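George's serialize-and-re-parse suggestion looks like this in Java (a sketch using the JDK's built-in JAXP DOM parser rather than the C++ Xerces-C API the original poster uses; the XML string stands in for serialized query results):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

class ReparseDemo {
    // Re-parse serialized XML into a DOM tree the application owns outright,
    // so no other library's memory manager is involved in its lifetime.
    static Document reparse(String serialized) {
        try {
            return DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(
                            serialized.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Pretend this string came from serializing a query result.
        String xml = "<entry id=\"1\"><title>hello</title></entry>";
        Document doc = reparse(xml);
        System.out.println(doc.getDocumentElement().getTagName()); // entry
    }
}
```

The cost is one extra parse per document, which is the efficiency loss George mentions, but the resulting tree is fully mutable and safe to serialize with any writer.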

  • About DBD XML physical storage xml document

    Recently I have been surveying BDB XML, and I want to know how it stores XML documents: in what format? Is it like Natix storage, in page files? And if no index can be used, will the system use tree traversal to answer a query?
    Also, is there any document or technical manual that introduces the BDB XML system's internal storage format, how queries are processed, how indexes are built, etc.? The documents I have read mainly explain how to use the system (like an API introduction).
    Thanks very much!!
    Thanks very much !!

    Hi Henry,
    The physical nodes store a large amount of information in a record, including the node's ID, its parent's node ID, its level in the tree, and the node ID of its last descendant.
    Ancestor-descendant relationships can be calculated using the node ID and last-descendant ID as upper and lower bounds. Parent-child relationships additionally use the node level information. Sibling relationships need to use the parent's ID to check that they have the same parent.
    Navigation, on the other hand, uses other node IDs stored in the physical node, or implicit information. For instance, if a node has children, its first child is always the next node record stored. The last child's ID is stored in the physical node, since it cannot be similarly calculated, as are the next and previous sibling node IDs.
    If you are interested look in dbxml/src/dbxml/nodeStore/NsFormat.(hpp|cpp), which contains the marshaling code for the node storage format.
    John
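John's interval scheme can be sketched in a few lines of Java (a toy model, not the actual NsFormat code; the field names and document-order numbering are invented for illustration):

```java
class NodeIdIntervalDemo {
    // Toy model of a stored node: document-order ID, last-descendant ID, tree level.
    record NodeInfo(long id, long lastDescendantId, int level) {}

    // a is an ancestor of d iff d's ID falls inside a's (id, lastDescendantId] range.
    static boolean isAncestor(NodeInfo a, NodeInfo d) {
        return d.id() > a.id() && d.id() <= a.lastDescendantId();
    }

    // Parent-child additionally checks the level difference, as John describes.
    static boolean isParent(NodeInfo p, NodeInfo c) {
        return isAncestor(p, c) && c.level() == p.level() + 1;
    }

    public static void main(String[] args) {
        // <root><a><b/></a><c/></root>, numbered in document order.
        NodeInfo root = new NodeInfo(1, 4, 0);
        NodeInfo a    = new NodeInfo(2, 3, 1);
        NodeInfo b    = new NodeInfo(3, 3, 2);
        NodeInfo c    = new NodeInfo(4, 4, 1);
        System.out.println(isAncestor(root, b)); // true
        System.out.println(isParent(root, b));   // false: b is a grandchild
        System.out.println(isAncestor(a, c));    // false: c is a sibling subtree
    }
}
```

The appeal of the scheme is that an ancestor/descendant test needs only two integer comparisons, with no tree traversal at all.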

  • Doubt About ASM

    Hi All,
    I have some doubts about ASM.
    I have installed ASM on my Linux box and created two databases which totally use ASM. I created the controlfiles on the local file system only.
    1. Suppose I want to remove one database. I just shut abort the database and remove the controlfile. Now I need to remove all files belonging to the DB_1 database from ASM. I am using one single disk group for both databases, so I cannot drop the disk group. How can I identify the files of the DB_1 database?
    2. If I create a normal-redundancy disk group using two failure groups, it shows me total_mb and free_mb as available space. Whenever my database takes one extent of 100 MB, it reduces free_mb by 200 MB, which is understandable. But I am not able to figure out what the required_mirror_free_mb column is saying. I have read the Oracle documentation but am still not able to understand.
    Can someone please shed some light on this?
    Regards,
    Js

    If you are using 10.2 then just use asmcmd. It is very similar to a regular Unix command prompt interface, when in fact it runs SQL commands against the ASM instance. If you are using 10.1 then you have to run those SQL commands manually.
