What are the folders 0-9 and a-f in the Firefox 4.0b cache?

Even worse, Firefox 4 does not store videos and .swf files in the cache. When you watch a video or play a game, leave the page, and then want to see the video or play the game again, you must wait for it to download all over again; it is not saved in any of the cache folders. What is going on with Mozilla and Firefox? Have they bowed down to the big boys with this new browser?

Similar Messages

  • What are the advantages of Compressor and is it even necessary

    What are the advantages of Compressor and is it even necessary?

    Necessary for some and not for others, probably a large majority, who can get by with the presets available in FCX.
    The users who need Compressor are those who want to control the parameters of their encodes to get the best possible trade-off between file size and quality, or those who want to do things like standards conversions, complex frame-speed changes, better re-scaling, de-interlacing and re-interlacing, output formats beyond those available in FCX, chapter markers for DVD and Blu-ray authoring, batch conversions for multiple purposes through droplets, and access to clusters for faster rendering.
    Russ

  • What are the units of "Width" and "Height" of a Shape?

    What are the units of the "Width" and "Height" properties of a Shape when programming?
    Something odd like points or twips or tweedles or nibbles?
    http://www.ransen.com Cad and Graphics software

    Width and Height are properties of type Single; they represent the dimensions of the shape in points, where 72 points = 1 inch.
    Regards, Hans Vogelaar (http://www.eileenslounge.com)
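    As a quick illustration of that conversion (a Python sketch; the helper names are my own, not part of any Office API):
    ```python
    # Points-to-physical-units conversion as described above: 72 points = 1 inch.
    POINTS_PER_INCH = 72
    CM_PER_INCH = 2.54

    def points_to_inches(pts):
        return pts / POINTS_PER_INCH

    def points_to_cm(pts):
        return points_to_inches(pts) * CM_PER_INCH

    # A shape reported as 144 x 72 points measures 2 x 1 inches.
    print(points_to_inches(144), points_to_inches(72))  # 2.0 1.0
    print(round(points_to_cm(144), 2))                  # 5.08
    ```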

  • What are the settings for datasource and infopackage for flat file loading

    Hi,
    I'm trying to load data from a flat file into a DSO. Can anyone tell me what the settings for the datasource and InfoPackage are for flat file loading?
    Please let me know.
    Regards,
    Kumar

    Loading of transaction data in BI 7.0: a step-by-step guide on how to load data from a flat file into the BI 7 system
    Uploading of Transaction data
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modelling view.
    1. Creation of Info Objects
    • In the left panel select info objects
    • Create an info area
    • Create an info object catalog (characteristics & key figures) by right-clicking the created info area
    • Create new characteristics and key figures under the respective catalogs according to the project requirements
    • Create the required info objects and activate them.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create an application component (AC)
    • Right-click the AC and create a datasource
    • Specify the data source name, source system, and data type (transaction data)
    • In the General tab give a short, medium, and long description.
    • In the Extraction tab specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator ( , ). A sample file sketch appears at the end of this answer.
    • In the Proposal tab load example data and verify it.
    • In the Fields tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, because the server will map them automatically. If you do not map them in this tab, you have to map them manually in the transformation in the InfoProvider.
    • Activate the data source and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In the left panel select InfoProvider
    • Select the created info area and right-click to create an ODS (DataStore object) or a cube
    • Specify a name for the ODS or cube and click Create
    • From the template window select the required characteristics and key figures and drag and drop them into the DATA FIELDS and KEY FIELDS
    • Click Activate.
    • Right-click the ODS or cube and select Create Transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the data target.
    • In the Extraction tab specify the extraction mode (full).
    • In the Update tab specify the error handling (request green).
    • Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
    4. Monitor
    Right-click the data targets, select Manage, and in the Contents tab choose Contents to view the loaded data. An ODS has two tables, a new table and an active table; to move data from the new table to the active table, activate it after selecting the loaded data. Alternatively, the monitor icon can be used.
    Loading of master data in BI 7.0:
    Uploading of master data
    Log on to your SAP system.
    Transaction code RSA1 takes you to the Modelling view.
    1. Creation of Info Objects
    • In the left panel select info objects
    • Create an info area
    • Create an info object catalog (characteristics & key figures) by right-clicking the created info area
    • Create new characteristics and key figures under the respective catalogs according to the project requirements
    • Create the required info objects and activate them.
    2. Creation of Data Source
    • In the left panel select data sources
    • Create an application component (AC)
    • Right-click the AC and create a datasource
    • Specify the data source name, source system, and data type (master data attributes, texts, or hierarchies)
    • In the General tab give a short, medium, and long description.
    • In the Extraction tab specify the file path, the number of header rows to be ignored, the data format (CSV) and the data separator ( , )
    • In the Proposal tab load example data and verify it.
    • In the Fields tab you can give the technical names of the info objects in the template; then you do not have to map them during the transformation, because the server will map them automatically. If you do not map them in this tab, you have to map them manually in the transformation in the InfoProvider.
    • Activate the data source and read the preview data under the Preview tab.
    • Create an InfoPackage by right-clicking the data source, and in the Schedule tab click Start to load the data to the PSA (make sure the flat file is closed during loading).
    3. Creation of data targets
    • In the left panel select InfoProvider
    • Select the created info area and right-click to select Insert Characteristic as InfoProvider
    • Select the required info object (e.g. Employee ID)
    • Under that info object select the attributes
    • Right-click the attributes and select Create Transformation.
    • In the source of the transformation, select the object type (data source) and specify its name and source system. Note: the source system will be a temporary folder or package into which the data is stored.
    • Activate the created transformation.
    • Create a data transfer process (DTP) by right-clicking the master data attributes.
    • In the Extraction tab specify the extraction mode (full).
    • In the Update tab specify the error handling (request green).
    • Activate the DTP, and in the Execute tab click the Execute button to load the data into the data targets.
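    The extraction-tab settings described above (CSV data format, comma separator, header rows to ignore) imply a flat file shaped roughly like the one sketched below. This is a hypothetical Python example with invented column names, not SAP output; note that the with-block also guarantees the file is closed before any load runs, as the guide requires.
    ```python
    import csv

    # Hypothetical flat file matching the extraction-tab settings above:
    # data format CSV, separator ",", and one header row to be ignored.
    # Column names are invented for illustration.
    rows = [
        ["CUSTOMER_ID", "MATERIAL", "QUANTITY", "AMOUNT"],  # header row the datasource skips
        ["C001", "M100", "10", "250.00"],
        ["C002", "M200", "5", "125.50"],
    ]

    # The 'with' block guarantees the file is closed before any load runs.
    with open("transaction_data.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)

    # A quick sanity check before loading: every data row has the same
    # number of fields as the header.
    with open("transaction_data.csv", newline="") as f:
        reader = csv.reader(f)
        header = next(reader)
        for line_no, row in enumerate(reader, start=2):
            assert len(row) == len(header), f"ragged row at line {line_no}"
    ```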

  • What are the prerequisites to install and configure Oracle Coherence 3.4

    What are the prerequisites for installing and configuring Oracle Coherence (3.4 Grid version) on a RHEL 5.0 system?
    I want to build an Oracle Coherence data grid for testing purposes using three machines.
    What kind of network configuration is OK?
    What software should be installed in advance?
    Thank you

    Hi,
    I would read through the Testing and Tuning section of the following page.
    http://coherence.oracle.com/display/COH34UG/Usage+(Full)
    Even though you are not about to go into production, I think the Production Checklist has some very useful information.
    http://coherence.oracle.com/display/COH34UG/Production+Checklist
    -Dave

  • What are the BIW setup, benefits and challenges? It's very urgent

    Dear gurus,
    I need to prepare a report for my client stating the BIW setup, benefits, and challenges to date. Can anyone help me put this report together?
    Thanks and regards,
    C.S.Ramesh

    Hi,
    Please check these for details:
    http://www.asug.com/client_files/Calendar/Upload/hpsapbi_burke.ppt
    http://www.citrix.co.uk/site/resources/dynamic/partnerDocs/SAP_Market_Pitch_Final.ppt
    http://www.jhu.edu/hopkinsone/Secure_Private/ProjectAreas/Sponsored/documents/ToBeProcessSPAwardSet-UpOnly.pdf#search=%22sap%20bw%20presentation%20slides%22
    Hope this helps,
    Regards
    CSM Reddy

  • What are the differences between Logos and LogosXT?

    What are the differences between Logos and LogosXT?

    Logos XT is a networking middle layer maintained by the LabVIEW Network Technologies and Security group. Logos XT provides a thin layer on top of TCP/IP to simplify some common network tasks.
    The underlying foundation for NI networking is called Logos.
    I believe the basic idea is that Logos is what goes on behind the scenes at the base level, and Logos XT lets you build your own networking protocols on top of Logos. Logos XT would be used if you want to make your own networking protocol instead of using raw TCP/IP or UDP.
    Scott A
    SSP Product Manager
    National Instruments
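    Logos itself is NI-internal, so the following is not Logos code; it is just a minimal Python sketch of the kind of common chore such a thin layer over TCP/IP typically handles, here length-prefixed message framing, which a raw TCP byte stream does not give you.
    ```python
    import struct

    # TCP delivers a continuous byte stream, not discrete messages. A thin
    # framing layer, the kind of chore a networking middle layer typically
    # handles for you, prefixes each message with its length so the receiver
    # can split the stream back into messages.

    def frame(payload: bytes) -> bytes:
        """Prefix a payload with a 4-byte big-endian length."""
        return struct.pack(">I", len(payload)) + payload

    def deframe(stream: bytes) -> list:
        """Split a received byte stream back into the framed messages."""
        messages, offset = [], 0
        while offset + 4 <= len(stream):
            (length,) = struct.unpack_from(">I", stream, offset)
            offset += 4
            messages.append(stream[offset:offset + length])
            offset += length
        return messages

    # Two logical messages survive concatenation into one stream.
    wire = frame(b"hello") + frame(b"world")
    print(deframe(wire))  # [b'hello', b'world']
    ```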

  • What are the differences between jdk and sdk

    What are the differences between the JDK and an SDK? Thanks.

    Just marketing whims.
    SDK = software development kit
    JDK = Java development kit
    Sun has used both names to refer to its SDK for Java. I forget which one they're using now, but they've switched a couple of times.

  • What are the differences between Essbase and Planning?

    What are the differences between Essbase and Planning?

    Planning is an enterprise application built around the Essbase OLAP engine.
    You can create planning applications with Essbase alone, but Planning uses best practices and has built-in enterprise features.
    Brian Chow

  • What are the differences between trigger and constraints?

    What are the differences between triggers and constraints?

    Try the documentation; this would be a good starting point: How Oracle Enforces Data Integrity
    C.

  • What are the differences between the iPhone 5c and the iPhone 5s?

    What are the differences between the iPhone 5c and the iPhone 5s?

    Read here:
    http://www.apple.com/iphone/compare/
    The iPhone 5c is basically the original iPhone 5 in different colors, while the 5s gets the improved hardware.
    Read the link above for a feature comparison.

  • HT5295 What are the basics of creating and distributing Podcasts?

    What are the basics of creating and distributing Podcasts?

    Search the web for "creating podcasts" and you'll find a plethora of information.
    Regards.

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario:
    I have about 20 source tables with a large amount of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables receive inserts every few seconds, of hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update' and
    how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    I want to understand what the role of 'Journalizing CDC' is.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need to have a PK on the tables?
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only the changed data in your source system. If your source tables had a date_modified column, you could always use that as a filter when scanning for changes rather than using CDC. Log-based CDC (Asynchronous in ODI: Logminer/Streams or GoldenGate, for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it does not fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table. You can avoid this by specifying ALL source table columns as your PK in ODI, which forces all columns into the unconditional log group, the journal table, and so on. You will then need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield in the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM set-up (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, in the right order, to apply to your target tables; without that, you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for 'changed data only' to be supplied to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the delete with a normal LKM / IKM set-up).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows on the load into the target.
    user604062 wrote:
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update' and how I can use them in my scenario.
    Hopefully I have explained that above. It's the type of thing you really need to play around with, thoroughly reviewing the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    In general, what are the relations between 'Journalizing' and 'IKM'?
    The JKM simply presents (only) changed data to ODI. It removes the need for you to decide 'how' to get the updates, and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last-update date, and so on).
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table minus source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand what the role of 'Journalizing CDC' is.
    It is the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (triggers (synchronous), Streams / Logminer (asynchronous), GoldenGate, etc.).
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the J$ source, and the IKM step changes with CDC because you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need to have a PK on the tables?
    Yes, at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael
    A lot to take in! As I advised, I would recommend you get a little test area set up, and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair
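    To make the primary-key point concrete, here is a toy Python model (the table and rows are invented, and nothing here is ODI-specific) of applying an ordered journal of changes to a target keyed by PK, which is roughly what the J$-to-target flow above does inside the database:
    ```python
    # Toy model of journalized incremental update. The journal records ordered
    # changes (Insert / Update / Delete) identified by primary key; the target
    # is keyed by that PK. Without a PK there is no way to know which target
    # row an update or delete applies to, which is the point made above.
    journal = [
        ("I", 1, {"name": "alpha"}),
        ("I", 2, {"name": "beta"}),
        ("U", 1, {"name": "alpha-v2"}),
        ("D", 2, None),
    ]

    target = {}

    for op, pk, row in journal:        # order matters: a U before its I would fail
        if op == "I":
            target[pk] = dict(row)
        elif op == "U":
            target[pk].update(row)     # the PK locates the row to change
        elif op == "D":
            target.pop(pk, None)       # 'Synchronise Journal Deletes', in miniature

    print(target)  # {1: {'name': 'alpha-v2'}}
    ```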

  • What are the differences between api and sdk

    Hi,
    Could anybody clarify for me what the differences between an API and an SDK are? I googled for the answer and couldn't find anything on this topic.
    Many thanks in advance.
    javasfan

    Is it correct to say that "an SDK includes all the APIs", or that "the API sits on top of the SDK"?
    It's a bit weird to say either. First, the JDK doesn't include all APIs, just the J2SE core API; others, like 3rd-party libraries or J2EE, are not included. Second, if you mean the API docs, they're also not included, IIRC.
    It'd technically only be correct to say "the SDK provides an API", I guess. The example is very lame, but: if the SDK is a machine, the API is the sum of its buttons and levers and gauges and intakes and outlets and exhausts. The API docs are the manual.
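    One rough way to picture "the SDK provides an API" in code, with invented names rather than anything from the JDK: the API is the set of names and signatures you call, and the SDK ships a concrete implementation of them plus docs and tools.
    ```python
    from abc import ABC, abstractmethod

    # The API: the "buttons and levers", i.e. the names and signatures you call.
    class PaymentAPI(ABC):
        @abstractmethod
        def charge(self, amount_cents: int) -> str:
            """Charge an amount; return a transaction id."""

    # Part of the SDK: a concrete implementation shipped in the kit (alongside
    # docs, tools and samples) that provides the API.
    class SandboxPayment(PaymentAPI):
        def charge(self, amount_cents: int) -> str:
            return f"sandbox-txn-{amount_cents}"

    print(SandboxPayment().charge(500))  # sandbox-txn-500
    ```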

  • What are the differences between tracing and hierarchical profiler?

    There are many terms used by people; I am just wondering what the differences between tracing and the hierarchical profiler are. Aren't they the same thing?
    Thanks a lot.

    Instrumentation and tracing are two different things; in fact they belong to two different categories - one's a thing the other's an activity.
    Tracing is following the execution path of a program. Tracing shows us the calls a program makes, perhaps the internal choices it makes (ifs and whiles), exceptions thrown, etc.
    Instrumentation is code we build into our program to produce a record of its status. There are different techniques, from using DBMS_APPLICATION_INFO calls to monitor status to writing log tables or files. Instrumentation can be used to generate a trace; it can do profiling; it can provide information reports on outputs and exceptions.
    In my opinion DBMS_TRACE and DBMS_HPROF are not instrumentation, because they are external to the program under investigation, rather than built into it. However, there is an obvious overlap between the insight they provide and what we might do with our own logging.
    Cheers, APC
    PS
    970992 wrote:
    you are not a stranger for me.
    Really? I don't believe we've met. I'm pretty certain I don't know anybody whose name is just a number.
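    A small Python sketch of the distinction (illustrative only, not Oracle's tooling): tracing follows the execution path from outside, while instrumentation is status reporting the program itself carries.
    ```python
    import logging
    import sys

    logging.basicConfig(level=logging.INFO, format="%(message)s")

    # Instrumentation: status records built into the program itself.
    def process(n):
        logging.info("process: starting with n=%d", n)    # instrumentation
        result = n * 2
        logging.info("process: finished, result=%d", result)
        return result

    # Tracing: an external observer following the execution path.
    def tracer(frame, event, arg):
        if event in ("call", "return") and frame.f_code.co_name == "process":
            print(f"TRACE {event}: {frame.f_code.co_name}")
        return tracer

    sys.settrace(tracer)   # switch tracing on
    process(21)
    sys.settrace(None)     # and off again
    ```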
