Searching through a Large Database

There's a specialty program that I use all the time, and it's built for Windows only. The program essentially searches through a large database of articles that this organization has published so you can read them.
I'm wondering what I should do to redevelop the database. Is there a good program to use, and what language should I use?

>I just got to track down who did it, I guess.
If you have that as a resource I'd very much recommend doing that. No reason to reverse-engineer the wheel.
>Is there a good step-by-step how-to for MySQL?
There are many scattered around the web.
Here is Apple's: http://developer.apple.com/internet/opensource/osdb.html
Here is MySQL's: http://dev.mysql.com/doc/refman/5.0/en/mac-os-x-installation.html
Also, some OS X installs seem to have shipped with OpenBase and some don't. I'm not sure what the process or license is on that, but it is much friendlier and more GUI-oriented than MySQL. If you have it, it's worth looking into whether it'll work for you.
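If you do go the MySQL route, a natural starting point for a searchable article database is a table with a FULLTEXT index queried with MATCH ... AGAINST, so the database does the searching rather than your application. The sketch below is a minimal, hypothetical example in Java/JDBC; the table name, columns, and connection details are all made up for illustration, and it assumes the MySQL Connector/J driver is available.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class ArticleSearch {
    public static void main(String[] args) throws Exception {
        // Older Connector/J versions need explicit driver registration; newer ones auto-register.
        Class.forName("com.mysql.jdbc.Driver");
        // Hypothetical connection details -- adjust for your own server.
        Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/articles_db", "user", "password");

        // One-time setup: an articles table with a FULLTEXT index on title and body.
        // (On MySQL 5.0/5.1 FULLTEXT requires the MyISAM engine; InnoDB supports it from 5.6.)
        Statement st = con.createStatement();
        st.executeUpdate("CREATE TABLE IF NOT EXISTS articles ("
                + " id INT AUTO_INCREMENT PRIMARY KEY,"
                + " title VARCHAR(255),"
                + " body TEXT,"
                + " FULLTEXT (title, body)) ENGINE=MyISAM");

        // Search: bind the user's keywords as a parameter and let the
        // full-text engine rank and return matching articles.
        PreparedStatement ps = con.prepareStatement(
                "SELECT id, title FROM articles WHERE MATCH(title, body) AGAINST (?)");
        ps.setString(1, "keywords the user typed");
        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getInt("id") + ": " + rs.getString("title"));
        }
        rs.close();
        ps.close();
        st.close();
        con.close();
    }
}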
Good Luck,
=Tod

Similar Messages

  • Searching through very large vectors

    I am working on a way to process two flat tab delimited files into a tree, assign a x and y coordinate to each node in the tree and output all the nodes (with their coordinates) to a new flat file.
    I currently have a program that works pretty well. It roughly uses the following flow.
    - Read both files into memory (open each file, read it line by line, and load the appropriate data from each line into a Vector, making sure no duplicates are entered by comparing the current line to the last line).
    - Using the first vector (which contains the starting nodes), search through the second vector (which contains parent-child relationships between 2 nodes) to construct the tree. For this tree I use an XML DOM Document. In this logic I use a for loop to find all the children for the given node. I store the index of each found reference, and when all children are found I loop through all the indexes and delete those records from the parent-child vector.
    - After the tree is created I walk through the tree and assign each node an x and y attribute.
    - When this is done I create a NodeList and use a for-loop to write each node (with x and y) to a StringBuffer, which is then written to a file. In this process, for each new node that is written I check (in the StringBuffer) whether the node (name) is already present. If not, I write the new node.
    - For debugging purposes I write all the references from the second Vector to a file and output the XML DOM tree to an XML file.
    This program works well. It handles files with 10,000 start nodes and 20,000 parent-child references (30,000 nodes in total) in under 2 minutes (with 1:20 spent generating the output file).
    However, when the volume of these files increases it starts to struggle.
    As the ultimate test I ran it with a file that contains 250,000 start nodes and 500,000 references. For it to run I need to use the -Xmx256m parameter to allocate extra memory, but I ran it for 2 hours and killed it because I didn't want to wait longer.
    What I would like to know is how I can approach this better. Right now I'm loading the data from the files into memory entirely. Maybe this isn't the best approach.
    Also, I'm looping through a Vector with 500,000 elements; how can this be done more efficiently? Note that the reference vector isn't sorted in any way.

    Hi,
    That's no problem. Here's some sample code:

    package tests;

    import java.util.List;
    import java.util.Map;
    import java.util.HashMap;
    import java.util.LinkedList;
    import java.util.Iterator;

    class Example {
        private List roots;
        private Map elements;

        public Example() {
            roots = new LinkedList();
            elements = new HashMap();
        }

        public void initRoots(String[] rows) {
            for (int i = 0; i < rows.length; i++) {
                String[] parts = rows[i].split(" ");
                String name = parts[0];
                roots.add(name);
                elements.put(name, new Node(name));
            }
        }

        public void addChilds(String[] rows) {
            for (int i = 0; i < rows.length; i++) {
                String[] parts = rows[i].split(" ");
                String parentId = parts[1];
                String name = parts[2];
                addNode(parentId, name);
            }
        }

        private void addNode(String parentId, String name) {
            Node current = (Node) elements.get(name);
            if (current == null) {
                current = new Node(name);
                elements.put(name, current);
            }
            Node parent = (Node) elements.get(parentId);
            if (parent == null) {
                // Parent is missing, is that a problem? Create it now.
                parent = new Node(parentId);
                elements.put(parentId, parent);
                return;
            }
            parent.addChild(current);
        }

        public void printTree() {
            for (Iterator it = roots.iterator(); it.hasNext(); ) {
                String id = (String) it.next();
                printChildren(id, 1);
            }
        }

        private void printChildren(String id, int depth) {
            Node node = (Node) elements.get(id);
            System.out.println(node);
        }

        private static final class Node {
            private String name;
            private List children;

            private Node(String name) {
                this.name = name;
                children = new LinkedList();
            }

            public void addChild(Node node) {
                children.add(node);
            }

            public String toString() {
                return name + " " + children;
            }
        }

        public static void main(String[] args) throws Exception {
            Example test = new Example();
            test.initRoots(new String[] {
                "SU_1 1 1 1 0 0 0 0",
                "SU_2 1 1 1 0 0 0 0",
                "SU_3 1 1 1 0 0 0 0"
            });
            test.addChilds(new String[] {
                "COM_1 SU_1 PR_1 0 0 0 0 0",
                "COM_1 PR_1 ST_1 0 0 0 0 0",
                "COM_2 SU_2 PR_2 0 0 0 0 0",
                "COM_2 PR_2 ST_2 0 0 0 0 0",
                "COM_3 SU_3 PR_3 0 0 0 0 0",
                "COM_3 PR_3 ST_3 0 0 0 0 0"
            });
            test.printTree();
        }
    }
    The execution prints:
    SU_1 [PR_1 [ST_1 []]]
    SU_2 [PR_2 [ST_2 []]]
    SU_3 [PR_3 [ST_3 []]]
    /Kaj

  • Intermedia search through a database link.

    Has anyone been able to do a search through a database link on an intermedia index in another database?
    My sql is:
    select title
    from [email protected]
    where contains (title,'test')>0;
    I get the following errors:
    ORA-20000:
    ORA-02063:

    I guess you cannot do this. I read somewhere (can't recall off the top of my head where) that this is not supported.

  • Can we query through 5-6gb large database with AIR

    As it creates one single file for the whole database, can we have a 5-6 GB database if an AIR application requires it?

    There's no arbitrary limit to the database size. It would depend on performance and the user's file system, I suspect. Only you could judge the performance aspect as it should depend on the complexity of your database and queries.

  • Move large database to other server using RMAN in less downtime

    Hi,
    We have a large database, around 20 TB. We want to migrate (move) the database from one server to another server. We do not want to use the standby option.
    1)     How can we move the database using RMAN with less downtime?
    2)     Other than RMAN, is there any other option available to move the database to the new server?
    For option 1 (restore using RMAN):
    Are the options below valid?
    If this option is valid, how do we implement it?
    1)     How can we move the database using RMAN with less downtime?
    a)     Take the full backup from source (source db is up)
    b)     Restore the full backup in target (source db is up)
    c)     Take the incremental backup from source (source db is up)
    d)     Restore incremental backup in target (source db is up)
    e)     Do steps c and d, before taking downtime (source db is up)
    f)     Shutdown and mount the source db, and take the incremental backup (source db is down)
    g)     Restore the last incremental backup and start the target database (target is up and the application is accessing the new db)
    database version: 10.2.0.4
    OS: SUN solaris 10

    Simple:
    I do this all the time to relocate file system files... But the principle is the same. You can do this in iterations so you do not need to do it all at once:
    Starting at 8 AM, move less-used files, and the more active files in the afternoon, using the following backup method.
    SCRIPT-1
    RMAN> BACKUP AS COPY
    DATAFILE 4    # "/some/orcl/datafile/usersdbf"
    FORMAT "+USERDATA";
    Do as many files as you think you can handle during your downtime window.
    During your downtime window: stop all applications so there is no contention in the database
    SCRIPT-2
    ALTER DATABASE DATAFILE 4 offline;
    SWITCH DATAFILE 4 TO COPY;
    RECOVER DATAFILE 4;
    ALTER DATABASE DATAFILE 4 online;
    I then execute the delete of the original file at some point later, after we make sure everything has recovered and been successfully brought back online.
    SCRIPT-3
    DELETE DATAFILECOPY "/some/orcl/datafile/usersdbf"
    For datafiles/tablespaces that are really busy, I typically copy them later in the afternoon as there are fewer archivelogs that it has to go through in order to make them consistent. The ones in the morning have more to go through, but less likelihood of there being anything to do.
    Using this method, we have moved upwards of 600 GB at a time and the actual downtime to do the switchover is < 2 hrs. YMMV. As I said, this can be done in stages to minimize overall downtime.
    If you need some documentation support see:
    http://docs.oracle.com/cd/E11882_01/server.112/e18951/asm_rman.htm#CHDBDJJG
    And before you do ANYTHING... TEST TEST TEST TEST TEST. Create a dummy tablespace on QFS and use this procedure to move it to ASM to ensure you understand how it works.
    Good luck! (hint: scripts to generate these scripts can be your friend.)
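    Taking the "scripts to generate these scripts" hint literally: one hedged way to do it is to generate the SCRIPT-1 BACKUP AS COPY commands from dba_data_files and paste the output into RMAN. The sketch below is in Java/JDBC purely for illustration; the connection details are assumptions, and "+USERDATA" is simply the diskgroup name used in SCRIPT-1 above.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class GenerateCopyScript {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; connect as a user that can read the DBA views.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//sourcehost:1521/orcl", "system", "password");
            Statement st = con.createStatement();

            // Emit one BACKUP AS COPY command per datafile, ready to paste into RMAN.
            ResultSet rs = st.executeQuery(
                    "SELECT file_id, file_name FROM dba_data_files ORDER BY file_id");
            while (rs.next()) {
                System.out.println("BACKUP AS COPY DATAFILE " + rs.getInt("file_id")
                        + " FORMAT \"+USERDATA\"; # " + rs.getString("file_name"));
            }
            rs.close();
            st.close();
            con.close();
        }
    }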

  • Efficient searching in a large XML file for specific elements

    Hi
    How can I search a large XML file for a specific element efficiently (fast and memory-friendly)? I have a large XML file (approximately 32 MB, with about 140,000 main elements) and I have to search through it for specific elements. What stable and production-ready open source tools are available for such tasks? I think PDOM is a solution, but I can't find any well-known and stable implementations on the web.
    Thanks in advance,
    Behrang Saeedzadeh.

    The problem with DOM parsers is that the whole document needs to be parsed!
    So with large documents this uses up a lot of memory.
    I suggest you look at something like a pull parser (Piccolo or MPX1), which is a fast parser that is program-driven rather than event-driven like SAX. This has the advantage of not needing to remember your state between events.
    I have used Piccolo to extract events from large XML-based log files.
    Carl.
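    For a rough idea of what pull parsing looks like in practice, here is a minimal sketch using StAX (javax.xml.stream), the pull-parsing API bundled with Java 6 and later. It illustrates the pull style generally rather than Piccolo or MPX1 specifically, and the file name and element name are placeholders.

    import java.io.FileInputStream;
    import javax.xml.stream.XMLInputFactory;
    import javax.xml.stream.XMLStreamConstants;
    import javax.xml.stream.XMLStreamReader;

    public class PullSearch {
        public static void main(String[] args) throws Exception {
            XMLInputFactory factory = XMLInputFactory.newInstance();
            XMLStreamReader reader =
                    factory.createXMLStreamReader(new FileInputStream("big.xml"));

            // The caller drives the parser, so only the current event is held in
            // memory; a 32 MB document never has to be built as a DOM tree.
            while (reader.hasNext()) {
                int event = reader.next();
                if (event == XMLStreamConstants.START_ELEMENT
                        && "article".equals(reader.getLocalName())) {
                    // Element of interest found; read its attributes or text here.
                    System.out.println("Found: " + reader.getAttributeValue(null, "id"));
                }
            }
            reader.close();
        }
    }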

  • Mysql query (search multiple tables in database)

    I have 12 tables in a database - january through to december.
    I need to search all 12 tables for 'keyword' phrases submitted by the user through a search form.
    There must be a more streamlined way of doing it than using 'UNION' as below. I have incorporated 2 tables in the query below, but I need a more 'condensed' query covering all 12 tables.
    $sql = ('SELECT * FROM january WHERE tourTitle = "'.$keyword.'" UNION SELECT * FROM february WHERE tourTitle = "'.$keyword.'"');
    Cheers
    Os

    bregent wrote:
    >That's what I did last year, but thought I'd break it down this year into 12 easier-to-work-with tables.
    No, Ben is correct. Using 1 table for each month is absolutely the wrong way. It violates basic rules of normalization and causes all sorts of problems.
    >Breaking it down appeals to be more so I can keep all the relevant months
    >together instead of potentially becoming scattered throughout one table.
    That's what you use the Order By clause for.
    >If by any chance the client says they want to update x, y or z I can go
    >straight to the month in question without the necessity to flip through
    >dozens of pages in phpMyAdmin as there is no real CMS management in place for this process.
    Not sure what you are saying. Performing inserts, updates and queries is much simpler using a single table.
    Whenever someone asks for a way to search through multiple tables, it tells me that the data structure is not designed well.
    When I did this job last year there were about 60 pages created in phpMyAdmin. The records for January could be anywhere on those 60 pages, as I may have to add additional records much later on in the process.
    My thinking behind this was to keep all the month entries together so I could view them easily in phpMyAdmin.
    Now, due to my lack of knowledge about phpMyAdmin, it could be possible to create a query to show only the January entries; I suspect it can do this.
    I agree it is a lot simpler using 1 table to select and search through, BUT I need, if the occasion arises, to be able to view all the January or February entries etc. one after the other, not 10 on page 2 and 3 on page 7 and 5 more on page 47 of phpMyAdmin.
    So I guess what I really need is to write a select query in phpMyAdmin which only shows the selected entries for the month requested. I have not done much investigation into what phpMyAdmin can do... so I suppose I need to.
    EDITED:
    Arrgh you see IT WAS SO SIMPLE:
    SELECT * FROM `tours` WHERE month = "March"
    It's because I'm frightened of the bloody thing in case I mess something up!
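    To make the single-table approach concrete, here is a minimal sketch, in Java/JDBC rather than the PHP used in the thread, of one normalized tours table with a month column. The table and column names echo the queries above; the connection details are assumptions. Binding the keyword as a parameter also avoids concatenating $keyword straight into the SQL string, as the original UNION query does.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class TourSearch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details.
            Connection con = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/tours_db", "user", "password");

            // One table with a month column replaces twelve monthly tables;
            // "keep the months together" becomes a WHERE filter (or an ORDER BY).
            PreparedStatement ps = con.prepareStatement(
                    "SELECT * FROM tours WHERE tourTitle = ? AND month = ?");
            ps.setString(1, "keyword the user submitted");  // bound, not concatenated
            ps.setString(2, "March");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("tourTitle"));
            }
            rs.close();
            ps.close();
            con.close();
        }
    }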

  • SAP EHP Update for Large Database

    Dear Experts,
    We are planning for the SAP EHP7 update for our system. Please find the system details below
    Source system: SAP ERP6.0
    OS: AIX
    DB: Oracle 11.2.0.3
    Target System: SAP ERP6.0 EHP7
    OS: AIX
    DB: 11.2.0.3
    RAM: 32 GB
    The main concern over here is the DB size, which is approximately 3 TB. I have already gone through forums and notes, where it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Please advise on this.
    Regards,
    Raja. G

    Hi Raja,
    The main concern over here is the DB size, which is approximately 3 TB. I have already gone through forums and notes, where it is mentioned that the DB size does not have any impact on the SAP EHP update using SUM. However, I am still thinking it will have an impact in the downtime phase.
    Although a 3 TB DB size may not have a direct impact on the upgrade process, the downtime of the system may vary with a larger database size.
    Points to consider
    1) DB backup before entering into downtime phase
    2) Number of Programs & Tables stored in the database. ICNV Table conversions and XPRA execution will be dependent on these parameters.
    Hope this helps.
    Regards,
    Deepak Kori

  • How search happens in oracle Database

    Select emp_name from employee where salary > 10000;
    How does a search occur in an Oracle database when there is an index on the salary column, and when there is no index on the salary column?

    user8850066 wrote:
    >i just want to know what happens internally to get the data.
    That is actually very complicated to answer. Oracle is a powerful and sophisticated piece of software, and what happens internally can vary based on configuration, data volume, and what else is happening. Books, papers and even entire conferences are dedicated to this question. Many myths abound, mostly from oversimplifications and improper assumptions about how Oracle works. The docs are good, but not so much for what is really happening internally.
    It is good to be curious. As the others said, read the concepts manual, then read the performance guide and apply what you see to learn what is happening. As you learn more and more, you'll discover even many gurus consider the database a black box that we can only poke at and infer what is happening inside. Oracle does have a lot of instrumentation to see what is going on, but when it comes down to it, we're all still surprised at times about what must be happening.
    I'd also recommend Tom Kyte's books, after you've digested the basics (some of which he wrote anyways). He has two great strengths: He explains what is happening clearly, and he shows you how to figure it out for yourself. Also Richard Foote's blog is excellent for the index half of your question, though it might be a bit much if you don't know the basic concepts.
    As you read through the concepts, you'll realize your question has to account for things like:
    Is it faster to get the data with one process or many?
    Are other people modifying the data?
    Do you want to get all the data as quickly as possible, or some of the data faster?
    Do you know a better way to get the data than Oracle can figure out?
    What are you going to do with the data when you get it?
    What if the computer crashes while you are getting it?
    What if the definition of the table changes while you are trying to get the data?
    What if the data is far away?
    What if someone doesn't want you to see it?
    What if you also need to get some other data too?
    All these and more can influence what Oracle does internally. On some basic level, you can say Oracle will do a full table scan or a modified b-tree index search, but beyond that, it can go nuts.
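    If you want to see which of those choices Oracle actually made for a given statement, a common starting point is EXPLAIN PLAN together with DBMS_XPLAN. The sketch below runs it from Java/JDBC against the query in the question; the connection details are assumptions, and this only shows the optimizer's plan, not everything that happens internally.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ShowPlan {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details.
            Connection con = DriverManager.getConnection(
                    "jdbc:oracle:thin:@//localhost:1521/orcl", "scott", "tiger");
            Statement st = con.createStatement();

            // Ask the optimizer to explain the statement without executing it.
            st.execute("EXPLAIN PLAN FOR "
                    + "SELECT emp_name FROM employee WHERE salary > 10000");

            // DBMS_XPLAN.DISPLAY formats the plan: look for TABLE ACCESS FULL
            // versus INDEX RANGE SCAN depending on whether the salary index exists.
            ResultSet rs = st.executeQuery("SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY)");
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
            rs.close();
            st.close();
            con.close();
        }
    }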

  • Problem rename sharepoint 2010 search service application admin database

    Hi all,
    I have a problem that hopefully someone has an answer to. I am not too familiar with SharePoint, so please excuse my ignorance.
    We have SharePoint 2010 on a Windows 2008 R2 server. Everything seems to work fine. But as you know, the default database names are horrendous. I have managed to rename all of them, except for the "search service application" admin database.
    the default is: Search_Service_Application_DB_<guid>
    the other 2 databases (crawl and property) were renamed without a problem.
    We are following the article from TechNet on how to rename the search service admin DB (http://technet.microsoft.com/en-nz/library/ff851878%28en-us%29.aspx). It says to enter the following command:
    $searchapp | Set-SPEnterpriseSearchServiceApplication -DatabaseName "new database name" -DatabaseServer "dbserver"
    However, I get an error about Identity being null. No big deal; I add the -Identity switch and the name of my search service application. But the real problem is the error it throws:
    Set-SPEnterpriseSearchServiceApplication : The requested database move was aborted as the associated search application is not paused.
    At line:1 char:54
    + $searchapp | Set-SPEnterpriseSearchServiceApplication <<<<  -Identity "Search Service Application" -DatabaseName "SharePoint2010_Search_Service_Application_DB" -DatabaseServer "dbserver"
        + CategoryInfo          : InvalidData: (Microsoft.Offic...viceApplication:SetSearchServiceApplication) [Set-SPEnterpriseSearchServiceApplication], InvalidOperationException
        + FullyQualifiedErrorId : Microsoft.Office.Server.Search.Cmdlet.SetSearchServiceApplication
    When I look at the crawling content sources, I see "Local SharePoint Sites" and its status is Idle. I even looked at this article on how to pause the search, to no avail. (http://technet.microsoft.com/en-us/library/ee808864.aspx)
    Does someone know how I can rename my Search Service Application admin database properly? Or at least "pause" that service so I can rename it?
    Thank you all in advance.

    If you want to have no GUIDs for your search admin DB, I recommend you check out this script :)
    Just delete your search service application (assuming you have just started).
    Alpesh

  • I downloaded IOS6 and all my apps, including the App Store icon, disappeared. If I go to the Passport icon, there is an App Store button, but I have to search through all the apps to find the one I want  and then click on "Open" to use it.  Help!

    I downloaded IOS6 and all of my app icons, including the App Store icon disappeared. Now to use an icon, I have to go to Passport and click on the App Store button at the bottom and search through all of the apps to find the one I want and then click on Open. There doesn't seem to be a way to delete the app and start over.

    Hey PlayerPS,
    Thanks for the question, and welcome to Apple Support Communities.
    It sounds like the application you are looking for is indeed still on your iPhone. You can confirm this by searching in the Spotlight Search for this application. It may have accidentally been moved to a folder, or an additional Home screen:
    iOS: Understanding Spotlight Search
    http://support.apple.com/kb/HT3636
    via http://manuals.info.apple.com/en_US/iphone_user_guide.pdf
    Thanks,
    Matt M.

  • Problem with  large databases.

    Lightroom doesn't seem to like large databases.
    I am playing catch-up using Lightroom to enter keywords to all my past photos. I have about 150K photos spread over four drives.
    Even placing a separate database on each hard drive is causing problems.
    The program crashes when importing large numbers of photos from several folders. (I do not ask it to render previews.) If I relaunch the program, and try the import again, Lightroom adds about 500 more photos and then crashes, or freezes again.
    I may have to go back and import them one folder at a time, or use iView instead.
    This is a deal-breaker for me.
    I also note that it takes several minutes after opening a database before the HD activity light stops flashing.
    I am using XP on a dual-core machine with 3 GB of RAM.
    Anyone else finding this?
    What is your work-around?

    Christopher,
    True, but given the number of posts where users have had similar problems ingesting images into LR--where LR runs without crashes and further trouble once the images are in--the probative evidence points to some LR problem ingesting large numbers.
    It may also be that users are attempting to use LR for editing during the ingestion of large numbers--I found that I simply could not do that without a crash occurring. When I limited it to 2k at a time--leaving my hands off the keyboard while the import occurred--everything went without a hitch.
    However, as previously pointed out, it shouldn't require that--none of my other DAMs using SQLite do that, and I can multitask while they are ingesting.
    But, you are right--multiple single causes--and complexly interrelated multiple causes--could account for it on a given configuration.

  • Looking for a free iOS 4 app that can search through .pdf files or spreadsheets

    Looking for a free iOS 4 app that can search through .pdf files or spreadsheets.
    Thanks

    Hey there
    "pdf creator" for iPad works flawlessly for me working with pdf files
    It takes care of all my needs
    I'm not sure about sending via Wi-Fi or Bluetooth, but I send them via e-mail all the time
    Possibly it could handle your needs as well
    Just type it into the App Store search field and the first one that comes up is the one I use
    Jump on over there and read up on it before buying and see if it will help you 
    Hope this helps
    Regards

  • How can we suggest a new DBA OCE certification for very large databases?

    How can we suggest a new DBA OCE certification for very large databases?
    What web site, or what phone number can we call to suggest creating a VLDB OCE certification.
    The largest databases that I have ever worked with are barely over 1 trillion bytes.
    Some people told me that the results of being a DBA totally change when you have a VERY LARGE DATABASE.
    I could guess that maybe some of the following configuration topics might be on it:
    * Partitioning
    * parallel
    * bigger block size - DSS vs OLTP
    * etc
    Where could I send in a recommendation?
    Thanks Roger

    I wish there were some details about the OCE data warehousing.
    Look at the topics for 1Z0-515. Assume that the 'lightweight' topics will go (like Best Practices) and that there will be more technical topics added.
    Oracle Database 11g Data Warehousing Essentials | Oracle Certification Exam
    Overview of Data Warehousing
      Describe the benefits of a data warehouse
      Describe the technical characteristics of a data warehouse
      Describe the Oracle Database structures used primarily by a data warehouse
      Explain the use of materialized views
      Implement Database Resource Manager to control resource usage
      Identify and explain the benefits provided by standard Oracle Database 11g enhancements for a data warehouse
    Parallelism
      Explain how the Oracle optimizer determines the degree of parallelism
      Configure parallelism
      Explain how parallelism and partitioning work together
    Partitioning
      Describe types of partitioning
      Describe the benefits of partitioning
      Implement partition-wise joins
    Result Cache
      Describe how the SQL Result Cache operates
      Identify the scenarios which benefit the most from Result Set Caching
    OLAP
      Explain how Oracle OLAP delivers high performance
      Describe how applications can access data stored in Oracle OLAP cubes
    Advanced Compression
      Explain the benefits provided by Advanced Compression
      Explain how Advanced Compression operates
      Describe how Advanced Compression interacts with other Oracle options and utilities
    Data integration
      Explain Oracle's overall approach to data integration
      Describe the benefits provided by ODI
      Differentiate the components of ODI
      Create integration data flows with ODI
      Ensure data quality with OWB
      Explain the concept and use of real-time data integration
      Describe the architecture of Oracle's data integration solutions
    Data mining and analysis
      Describe the components of Oracle's Data Mining option
      Describe the analytical functions provided by Oracle Data Mining
      Identify use cases that can benefit from Oracle Data Mining
      Identify which Oracle products use Oracle Data Mining
    Sizing
      Properly size all resources to be used in a data warehouse configuration
    Exadata
      Describe the architecture of the Sun Oracle Database Machine
      Describe configuration options for an Exadata Storage Server
      Explain the advantages provided by the Exadata Storage Server
    Best practices for performance
      Employ best practices to load incremental data into a data warehouse
      Employ best practices for using Oracle features to implement high performance data warehouses

  • Automatic Area of Search through finder?

    By default, every time I try to search through Spotlight in Finder, it automatically searches "This Mac". I was wondering if there was a way to have it search whatever folder I am currently in, like it did in OS 10.3. Instead it comes up searching my hard drive and I have to manually click on whatever folder I am in. Also, does anyone know how to make it so that if you have something typed into Spotlight in Finder, it will stay in the search box instead of being erased whenever I move to another folder? It makes it very agitating to have to retype the file name in each folder I switch to. This is also another thing that changed from OS 10.3. If anyone has any way to help me with either of these problems, I would really appreciate it. Thank you.

    If you did this in AppleScript, you'd need to call Unix through do shell script anyway, so don't bother with AppleScript. Just use mdfind directly, e.g.:
    # finds files that have a metadata attribute called MyKeyword with value DesiredValue
    mdfind "MyKeyword == DesiredValue"
    # finds files that have a metadata attribute called MyKeyword whose values start with Desi
    mdfind "MyKeyword == 'Desi*'"
    # finds files that have some metadata attribute whose value contains redVal
    mdfind '*redVal*'
    To make a smart folder, just do a search in the Finder and then save it to the desktop (careful, it will default to saving it in the sidebar, which will put it in some funky folder down in your library). Once you've saved it you can open it in a text editor or plist editor to see the contained mdfind command. Then it's just a question of modifying the right keywords. I have an AppleScript somewhere that does it, but it's not all that useful - almost as easy to make the smart folder by hand in the Finder, and there's no way to extract the files from the smart folder once you've made it.
