WordPress big database

Hi,
I'm using WordPress and MySQL to build my magazine website (it covers chứng khoán, i.e. the securities/stock market). The data has grown very big and the site is noticeably slower than when I first launched it.
How can I move my database from MySQL to something else such as NoSQL, or should I upgrade my MySQL to a higher version?
Please advise me.
Thank you very much.

These forums are for Oracle databases; please ask in the MySQL or NoSQL forums.
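Before considering a migration, it is usually worth checking where the bloat actually is on the MySQL side. A minimal diagnostic and cleanup sketch (not from the original thread), assuming the default wp_ table prefix; back up the database before running the DELETE/OPTIMIZE statements:

    -- How big has each table grown?
    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024) AS size_mb
    FROM information_schema.tables
    WHERE table_schema = DATABASE()
    ORDER BY size_mb DESC;

    -- Post revisions and orphaned metadata are common culprits on magazine sites.
    DELETE FROM wp_posts WHERE post_type = 'revision';
    DELETE pm FROM wp_postmeta pm
           LEFT JOIN wp_posts p ON p.ID = pm.post_id
           WHERE p.ID IS NULL;
    OPTIMIZE TABLE wp_posts, wp_postmeta;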

Similar Messages

  • Learning PT by using a big database?

    I have been taught that in order to learn performance tuning you have to work with a big database containing lots of data, in order to gain experience, and that you cannot gain real knowledge of PT when you work with a small database such as AdventureWorks. Is it
    true?

    Hello,
    I somewhat agree with this. The bigger the database, the more complex its operation becomes, the more chances there are for errors, and so the more you learn. It's not just performance tuning; it applies to all other aspects as well. A real-time OLTP environment will
    give you real issues. For example, an index rebuild on a 2 GB DB is much simpler than on a 500 GB DB. The same goes for reorganize and update statistics.
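    As a concrete illustration, here is a minimal T-SQL sketch of the maintenance operations mentioned above (dbo.Orders and IX_Orders_Date are hypothetical placeholders); the statements are the same on a 2 GB and a 500 GB database, only the runtime and the planning around them differ:
    -- Light-weight, online defragmentation of a single index
    ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;
    -- Full rebuild of the same index (heavier; offline unless ONLINE = ON is available)
    ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;
    -- Refresh the optimizer statistics for the table
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;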
    Please mark this reply as the answer or vote as helpful, as appropriate, to make it useful for other readers

  • Dynamic JCombobox for a very big database resultset

    I want to use a JCombobox or similar for selecting values from a big database resultset. I'm using an editable one with the SwingX autocomplete decorator. My strategy is:
    * show only the first X records;
    * let the user enter some text in the combobox and refine the search, reloading the model.
    Does someone have sample code, or know of components that do this?
    Or can you point me to some implementation details?
    A lot of thanks in advance.
    PS:
    I need something efficient that doesn't query the database too much.
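    For the model-reload step described above, the per-keystroke query can stay cheap if it filters on the typed prefix and caps the row count. A minimal SQL sketch, assuming a hypothetical lookup table ITEMS with a NAME column (use LIMIT or ROWNUM on databases that do not support FETCH FIRST):
    -- bind the text currently typed in the combobox as the single parameter
    SELECT name
      FROM items
     WHERE UPPER(name) LIKE UPPER(? || '%')
     ORDER BY name
     FETCH FIRST 50 ROWS ONLY;   -- "show only the first X" rows
    An index on UPPER(name) keeps the prefix filter index-friendly, so each refinement only touches a small slice of the big table.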

  • The options to replicate a secondary read-only copy of a big database with limited network connection?

    There is a big database on a remote server. A read-only replica is required on a local server. The data can only be transferred via FTP, etc. It's OK to replicate it once a day.
    Log shipping is an option; however, it needs to kill all the connections when doing the restore. What are the other options (pros/cons)? How about merge replication or the .NET Sync Framework?

    Hi ydbn,
    Do you need to update data on the local server and propagate those changes to the remote server? If not, you can use log shipping or transactional replication to achieve your requirement. It doesn't need to kill all the connections if you
    clear the "Disconnect users in the database when restoring backups" check box when configuring log shipping.
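    To make the log shipping option concrete, here is a minimal sketch of the restore step on the local (secondary) server in standby mode; the database name and paths are placeholders:
    -- Restore the copied log backup, leaving the database readable between restores
    RESTORE LOG SalesDB
    FROM DISK = N'D:\logship\SalesDB_20150101.trn'
    WITH STANDBY = N'D:\logship\SalesDB_undo.dat';
    -- Note: each log restore still needs exclusive access to the database, so the restore
    -- job either waits for a quiet period or disconnects the read-only users first.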
    With transactional replication, the benefits are as follows:
        • Synchronization. This method can be used to keep multiple subscribers synchronized in real time.
        • Scale out. Transactional replication is excellent for scenarios in which read-only data can be scaled out for reporting purposes or to enable e-commerce scalability (such as providing multiple copies of product catalogs).
    There are a few disadvantages of utilizing transactional replication, including:
        • Schema changes/failover. Transactional subscribers require several schema changes that impact foreign keys and impose other constraints.
        • Performance. Large-scale operations or changes at the publisher might require a long time to reach subscribers.
    However, if you need to update data on the local server and propagate those changes to the remote server, merge replication is more appropriate, and it comes with the following advantages:
        • Multi-master architecture. Merge replication does allow multiple master databases. These databases can manage their own copies of data and marshal those changes as needed between other members of
    a replication topology.
        • Disconnected architecture. Merge replication is natively built to endure periods of no connectivity, meaning that it can send and receive changes after communication is restored.
        • Availability. With effort on the part of the developers, merge-replicated databases can be used to achieve excellent scale-out and redundancy options.
    Merge replication comes with some disadvantages, including:
        • Schema changes. Merge replication requires the existence of a specialized GUID column per replicated table.
        • Complexity. Merge replication needs to address the possibility for conflicts and manage operations between multiple subscribers, which makes it harder to manage. For more details, please review this
    article.
    For the option of sync framework, I would like to recommend you post the question in the Sync Framework forums at
    https://social.msdn.microsoft.com/Forums/en-US/home?category=sync . It is appropriate and more experts will assist you. Also you can check this
    article about introduction to Sync Framework database synchronization.
    Thanks,
    Lydia Zhang
    TechNet Community Support

  • Optimize delete in a very big database table

    Hi,
    For deleting entries from a database table I use the statement:
    Delete from <table> where <zone> = 'X'.
    The delete takes seven hours (the table is very big and <zone> isn't indexed).
    How can I optimize it to reduce the delete time?
    Thanks in advance for your response.
    Regards.

    what is the size of the table and how many lines are you going to delete?
    I would recommend deleting only up to 5,000 or 10,000 records in one step, for example:
    do 100 times.
      select * from <table>
             into table itab
             up to 10000 rows
             where <zone> = 'X'.
      if itab is initial.
        exit.
      endif.
      delete <table> from table itab.
      commit work.
    enddo.
    If this is still too slow, then you should create a secondary index on <zone>.
    You can drop the index after the deletion is finished.
    Siegfried

  • What are solutions for a way-too-big database?

    Hi guys!
    I'm a software developer and not very good at database design. One day, I was asked something like this:
    "For example, there is a company with a web application. One day, the database for that application is way too big and causes performance issues and other problems. What is the solution for that application's database?"
    At first, I thought that was about using multiple databases with a single app. But I don't know if I was right.
    I want to ask what the solutions are. If it's "multiple databases", then what should I do? Use two connections to 2 databases simultaneously?
    I appreciate any replies. Thanks!

    847617 wrote:
    Thanks Lubiez Jean-Val... for your links.
    I've got some more advice, like:
    - "transferring workload to another database using different techniques to copy the data from the original db"
    - "redesign of the database"
    So that means we use 2 different databases?
    Sometimes it is deemed desirable to keep only fairly recent data in the OLTP database, where the normal transaction activity happens, and replicate the data to another database that also contains historical data. This second database is used for heavy reporting tasks.
    And "redesign"?
    As in, design it from scratch and do it right this time. Make sure all data relations are properly defined to Third Normal Form; make sure all data is typed properly (use DATE columns for dates, NUMBER columns for numbers, etc.); make sure you have designed effective indexing; make sure you use the capabilities of the RDBMS and do NOT just use it as a data dump.
    See http://www.amazon.com/Effective-Oracle-Design-Osborne-ORACLE/dp/0072230657/ref=sr_1_3?s=books&ie=UTF8&qid=1301257486&sr=1-3
    Are they really good solutions?
    Like most everything else, "It depends."
    It depends on whether the proposed solutions are implemented properly and address the root problem. The root problem (or even perceived problem) hasn't yet been defined. You've just assumed that at some undefined point the database becomes "way too big" and will cause some sort of problem.
    It's assumed that we don't have or can't use partitioning.
    And why is that assumed? Yes, you have to have a version of Oracle that supports it, and it is an extra-cost license. But like everything else, you and your management have to do a hard-nosed cost/benefit analysis. You may think you can't afford the cost of implementing partitioning, but it may be that you can't afford the expenses that come from NOT implementing it. I don't know what the case is for you, but you and your management should consider the factors instead of just rejecting it out of hand.
    :):)... You are making me - a student - so excited about the history. From slide rules to the moon....
    Edited by: 847617 on Mar 27, 2011 10:01 AM
    Edited by: EdStevensTN on Mar 27, 2011 3:24 PM
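    As a concrete illustration of the partitioning option discussed in the reply above, here is a minimal Oracle sketch with a hypothetical SALES table, range-partitioned by date (partitioning is an extra-cost option of Enterprise Edition):
    CREATE TABLE sales (
      sale_id   NUMBER,
      sale_date DATE,
      amount    NUMBER(12,2)
    )
    PARTITION BY RANGE (sale_date) (
      PARTITION p2010 VALUES LESS THAN (DATE '2011-01-01'),
      PARTITION p2011 VALUES LESS THAN (DATE '2012-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    -- Aging out old data then becomes a metadata operation rather than a huge DELETE:
    ALTER TABLE sales DROP PARTITION p2010;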

  • Recommended way to start a merge replication with big database

    Hi all
    I need to install merge replication between 2 different stores with SQL Server 2012; they are connected via a 2 Mb VPN and the database is about 4 GB. Given that we have 2 Mb for the initial sync and the database is big, what is the recommended way to do that
    without using the snapshot agent step? Can I take a backup of the DB, restore it on the second server, and set up the merge replication? If so, where do I tell the wizards that the databases are already there, so they do not use the snapshot agent and just start
    to replicate?
    Thanks in advance.
    James

    Create the publication and snapshot. Zip up the snapshot and send it via FedEx to the subscriber. Apply the snapshot on the subscriber by pointing to the unzipped snapshot using the altSnapshotFolder parameter of the merge agent.
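    A minimal T-SQL sketch of the publisher-side piece of that approach (run in the publication database; the publication name and folder are placeholders). The zipped-up snapshot is then unpacked at the subscriber and referenced with the Merge Agent's -AltSnapshotFolder switch, as described above:
    -- Write a compressed snapshot to a folder that is easy to zip up and ship
    EXEC sp_addmergepublication
         @publication = N'StorePub',
         @alt_snapshot_folder = N'D:\repl\snapshots',
         @compress_snapshot = N'true';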
    Looking for a book on SQL Server 2008 Administration?
    http://www.amazon.com/Microsoft-Server-2008-Management-Administration/dp/067233044X
    Looking for a book on SQL Server 2008 Full-Text Search?
    http://www.amazon.com/Pro-Full-Text-Search-Server-2008/dp/1430215941

  • Export very big database.

    Hello,
    I have a 9i database on Linux. This database is very big, in terabytes. I want to move the database to another server.
    An export backup is taking too much time. Can you please suggest how I can move my database in a few hours?
    Thanks in advance.
    Anand.

    Tricky. Especially since you don't say if the new server is running Linux, too. And you also don't say (which makes a big difference) if the new server will be running 10g.
    But you might be able to do a transportable tablespace migration. That involves exporting only the contents of your existing data dictionary (a matter of a few minutes at most, usually); copying the DBF files to the new server; and then plugging them in by importing your data dictionary export. The major time factor in that lot is the physical act of copying the datafiles between servers. But at least you're not extracting terabytes of data and then trying to re-insert the same terabytes!
    If your new server is not running Linux, forget it, basically, because cross-platform tablespaces are only do-able in 10g and with lots of restrictions and caveats (but you might get lucky... you'd have to read tahiti.oracle.com to find out if you could get away with it).
    If your new server is running 10g, you're also going to be in for tricky times, though it's not impossible to transport between 9i and 10g. Easiest thing, if possible, is to create your 10g database with COMPATIBLE set to 9.x.x, do the transport and then increase your compatible parameter afterwards.
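    A minimal sketch of the transportable tablespace route on the source 9i database, assuming a hypothetical application tablespace USERS_TS (the tablespace name, file names and exp/imp command lines are illustrative only):
    -- Check that the tablespace set is self-contained
    EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS_TS', TRUE);
    SELECT * FROM transport_set_violations;
    -- Make the tablespace read only before copying its datafiles
    ALTER TABLESPACE users_ts READ ONLY;
    -- Export just the metadata (run from the OS shell), e.g.:
    --   exp userid='sys/... as sysdba' transport_tablespace=y tablespaces=users_ts file=users_ts.dmp
    -- Copy the .dbf files plus users_ts.dmp to the new server, then plug them in, e.g.:
    --   imp userid='sys/... as sysdba' transport_tablespace=y datafiles=/u01/oradata/users_ts01.dbf file=users_ts.dmp
    -- Afterwards the tablespace can be made read write again on the target
    ALTER TABLESPACE users_ts READ WRITE;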

  • How can I support big database in redHat linux?

    I know that in Berkeley DB a database is like a table. But if I put a lot of data into the table, the database file would exceed 2 gigabytes, which is not supported by the Linux OS. Is there a way around this?

    Yes, the OS does not support files bigger than 2 GB.
    Why doesn't Berkeley DB use many different files to save
    a single database automatically? If so, when the first
    file's size exceeds what the OS supports, Berkeley
    DB would automatically create another big file.
    Shuanghua
    Hi Shuanghua,
    This feature request has not come up, since most file systems can support files that exceed 2 GB. Since Linux has had this support for years, I suggest you look into turning that option on, or going with a kernel version/file system that supports files that exceed 2 GB.
    If you can come up with a good reason for this feature that would be useful to others, then we will consider it for a future release. With the information you have provided thus far, it seems like you should simply change your file system/kernel so you can get around the restriction.
    Ron Cohen

  • CSM 3.3.1 : Big database

    Hi there,
    is there any way to shrink a CSM 3.3.1 database?
    I have a 3 GB DB and it causes problems for replication.
    thanks for your help

    What you can do is go to Tools > CSM Admin. Under Archived, you can delete the archives of all your devices and only keep the last 10, for example.
    That should significantly decrease the disk space used.
    I hope it helps.
    PK

  • Big database question

    Dear all,
    please help with such a problem: we have a database whose total size on disk is about 39 GB, with 2 dense dimensions and 6 sparse dimensions, loaded weekly and monthly. The load file size is some tens of MB at the beginning of the month and about 600 MB at the end of the month (~1.5 million lines). The last time the monthly load was run, the dimension build completed successfully, but the data import failed. The reason was that Essbase created one additional data file (.pag) of 2 GB and tried to create another data file, but the free space on the disk ran out. Yes, there wasn't much free space on the disk, but never before had Essbase needed so much free space. It looks a bit strange that for a 600 MB load file Essbase creates a 2 GB data file. Please give me some tips: is this behaviour normal, and what should I do to improve the situation? Essbase version 7.1.2.
    Thank you.
    Natalia.

    Hi All,
    Can someone help me resolve this Essbase server error:
    Error(1270041) : Sort operation ran out of memory. Please increase the size of aggregate storage cache
    I am trying to extract 1.5 GB of data from an ASO application with Essbase server v9.3.0. I tried to increase the data retrieval buffers, i.e. the buffer size and sort buffer size, from 10 KB to 100000 KB, which is the maximum. I also tried setting VLBREPORT TRUE in the Essbase config file. All in vain; the server gives the same error. Please help me resolve this issue. I am getting this error while extracting 1 GB of data as well as 16 MB of data.
    While extracting 16 MB of data I sometimes also get this error:
    "Error 1001200 - Report error. Not enough memory to continue processing."
    Please let me know how to resolve these errors.
    Thanks
    Prakash

  • Enterprise system for big oracle database and datawarehouse

    Hi
    I want to implement a system for a big Oracle database that grows rapidly (by 20 GB monthly).
    What are the best specifications for the required servers?
    What is the best way to implement backup for my big database?
    What server is required to implement the data warehouse?
    What other servers do I need to implement the applications that will interact with the big Oracle database and data warehouse?
    Best Regards,
    Alaa

  • Takes too long to open a database

    Dear experts,
    we are trying to load a lot of data into BDB JE 4.1.10, and at the moment the total size of all log files is about 306 GB (~55 million records with key lengths of 15-20 bytes and values of 10-30 KB); the total number of files is 1570.
    Usage pattern: big batch updates once a week, the rest of the time read-only with random reads.
    So far the experience of using BDB JE has been very good, but one annoying thing is that it takes ~30 min to open the database (new Environment(...) takes only 10 seconds, but dbEnv.openDatabase(...) is very slow). It is slow even after a clean shutdown, and even when there were previously no updates at all. Looking at iostat, the application is very busy reading from disk.
    Is there a way to speed up opening the database? Or maybe this is a reasonable time for such a big database?
    Settings:
    envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_MAX, "" + (200 * MB));
    envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_CACHE_SIZE, "500");
    envConfig.setConfigParam(EnvironmentConfig.LOG_WRITE_QUEUE_SIZE, "" + (24 * MB));
    envConfig.setConfigParam(EnvironmentConfig.LOG_BUFFER_SIZE, "" + (8 * MB));
    envConfig.setConfigParam(EnvironmentConfig.CHECKPOINTER_BYTES_INTERVAL, "" + (100 * MB));
    envConfig.setConfigParam(EnvironmentConfig.CLEANER_MIN_AGE, "10");
    envConfig.setConfigParam(EnvironmentConfig.CLEANER_MAX_BATCH_FILES, "10");
    envConfig.setConfigParam(EnvironmentConfig.CLEANER_READ_SIZE, "" + (4 * MB));
    envConfig.setConfigParam(EnvironmentConfig.CLEANER_LOOK_AHEAD_CACHE_SIZE, "" + (2 * MB));
    envConfig.setConfigParam(EnvironmentConfig.LOG_FAULT_READ_SIZE, "" + (24 * KB));
    envConfig.setConfigParam(EnvironmentConfig.LOG_ITERATOR_READ_SIZE, "" + (128 * KB));
    envConfig.setConfigParam(EnvironmentConfig.EVICTOR_LRU_ONLY, "false");
    envConfig.setConfigParam(EnvironmentConfig.EVICTOR_NODES_PER_SCAN, "200");
    System: SUSE with 32 GB of RAM and 2 CPUs with 4 cores each, RAID0 on 4 disks at 7200 rpm. BDB JE gets 75% of the 10 GB available to the application as cache.
    Thanks in advance!

    Hi ambber,
    Thanks for your suggestion. A counter that must be updated atomically and logged periodically by JE would be accurate, but would add the same sort of complexity to JE that you mention was added to your application. More importantly, it would only address one type of aggregate "count" that an application may be interested in. Other applications, for example, need to find the number of records in a key range, where the range end points are not predetermined.
    Another possibility is for JE to provide a very rough estimate of the record count, based on the depth of the Btree and the number of internal nodes below the root level of the tree. For example, if the tree is 5 levels deep, the tree has N nodes under the root level, 128 is the maximum size of each node, and nodes are on average 75% full, then the total number of records is very roughly:
    N * ((128 * 0.75) ^ 4)
    The advantage of this approach is that it requires no additional data to be logged, does not reduce concurrency, and a similar approach can be applied to a key range as well as to the entire database.
    Because it would be a very rough estimate, it would mainly be useful for determining optimal query strategies.
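    As an illustrative (hypothetical) calculation: with N = 1,000 nodes under the root of a 5-level tree, the estimate would be roughly 1,000 * (128 * 0.75)^4 = 1,000 * 96^4 ≈ 8.5 * 10^10 records.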
    How are you using the record count?
    Just a thought.
    --mark

  • Ordimage performances big issue, please help

    Hi, we are developing a visual retrieval application based on Intermedia.
    The Database is Oracle 10g (10.1.0.4)
    The Database is a RAC(Real Application Cluster) running on 2 Servers
    The storage is a 2TB SAN
    Each server is a quad-CPU (Itanium 2, 3GHz) with 8GB of ram each, the OS is Windows 2003 Server 64-bit Edition
    One table of the DB contains the ORDImageSignature objects; we created a tablespace for the table itself and 2 other tablespaces, one for the ORDImage signature filter and one for the ORDImage signature index.
    This is the creation script for the index:
    CREATE INDEX APP_IMG_SIGNIDX ON APP_IMG
    (SIGNATURE)
    INDEXTYPE IS ORDSYS.ORDIMAGEINDEX
    PARAMETERS('ORDImage_Filter_Tablespace=TPC00F,ORDImage_Index_Tablespace= TPC00I');
    TPC00F and TPC00I are 1GB each.
    the tablespaces TPC00F and TPC00I are stored in the SAN and are dedicated only to ordimage data.
    The table contains almost 300K rows.
    Running a query like this (and no other activities on the server):
    SELECT a.id_img, ORDSYS.imgscore (123)
    FROM app_img a, app_img b
    WHERE ORDSYS.imgsimilar (a.signature, b.signature, 'color="0.20" texture="0.80"', 5, 123) = 1
    AND b.id_img = 2377165
    ORDER BY ORDSYS.imgscore (123) ASC;
    The query can take from 3 to 7 minutes, depending on the ORDImage parameters I choose.
    This is obviously an unacceptable time for the hardware we have. Monitoring the performance of the machine during the execution of the query shows that the CPU is almost idle while the IO queue is at almost 90% for the whole running time of the query. We also tried altering the table with the command:
    alter table app_img parallel (Degree 8);
    but the execution time is the same.
    ANY help would be very well accepted, thank you in advance.
    Best Regards, Stefano

    Do you still have problems with the performance?
    I'm facing the same performance problems; I'm just testing a small number of images from a big database. I have loaded (only) 5500 images into the table and generated signatures for them.
    I have created an ORDSYS.ORDImageIndex on the signature, and analyzed that too.
    After loading > 2000 images the performance goes down, and with (only) 5500 images
    I get times on queries with IMGSimilar between 20 and 30 sec. . . .
    PS: the total amount of images in the database is over 5 million, but only 5500 are in the table img_ordimage (so far)...
    Oracle version : 10g Enterprise Edition Release 10.1.0.4.0 - 64bit Production
    OS : SunOS storea 5.9 Generic_118558-30 sun4u sparc SUNW,Sun-Fire-V210
    The output from tkprof:
    oracle@storea 10:24 ~/admin/storedb/udump >cat tkprof.out
    TKPROF: Release 10.1.0.4.0 - Production on Fri Aug 24 10:24:12 2007
    Copyright (c) 1982, 2004, Oracle. All rights reserved.
    Trace file: storedb_ora_27804.trc
    Sort options: default
    count = number of times OCI procedure was executed
    cpu = cpu time in seconds executing
    elapsed = elapsed time in seconds executing
    disk = number of physical reads of buffers from disk
    query = number of buffers gotten for consistent read
    current = number of buffers gotten in current mode (usually for update)
    rows = number of rows processed by the fetch or execute call
    alter session set sql_trace true
    call count cpu elapsed disk query current rows
    Parse 0 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 1 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    select metadata
    from
    kopm$ where name='DB_FDO'
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 2 0 1
    total 3 0.00 0.00 0 2 0 1
    Misses in library cache during parse: 0
    Optimizer mode: CHOOSE
    Parsing user id: SYS (recursive depth: 1)
    Rows Row Source Operation
    1 TABLE ACCESS BY INDEX ROWID KOPM$ (cr=2 pr=0 pw=0 time=136 us)
    1 INDEX UNIQUE SCAN I_KOPM1 (cr=1 pr=0 pw=0 time=67 us)(object id 350)
    select u.name, o.name, a.interface_version#
    from
    association$ a, user$ u, obj$ o where a.obj# = :1
    and a.property = :2
    and a.statstype# = o.obj# and
    u.user# = o.owner#
    call count cpu elapsed disk query current rows
    Parse 5 0.00 0.00 0 0 0 0
    Execute 5 0.00 0.00 0 0 0 0
    Fetch 5 0.00 0.00 0 20 0 1
    total 15 0.00 0.00 0 20 0 1
    Misses in library cache during parse: 0
    Optimizer mode: CHOOSE
    Parsing user id: SYS (recursive depth: 1)
    Rows Row Source Operation
    0 NESTED LOOPS (cr=3 pr=0 pw=0 time=689 us)
    0 NESTED LOOPS (cr=3 pr=0 pw=0 time=649 us)
    0 TABLE ACCESS FULL ASSOCIATION$ (cr=3 pr=0 pw=0 time=645 us)
    0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us)
    0 INDEX UNIQUE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us)(object id 36)
    0 TABLE ACCESS CLUSTER USER$ (cr=0 pr=0 pw=0 time=0 us)
    0 INDEX UNIQUE SCAN I_USER# (cr=0 pr=0 pw=0 time=0 us)(object id 11)
    select a.default_cpu_cost, a.default_io_cost
    from
    association$ a where a.obj# = :1
    and a.property = :2
    call count cpu elapsed disk query current rows
    Parse 3 0.00 0.00 0 0 0 0
    Execute 3 0.00 0.00 0 0 0 0
    Fetch 3 0.00 0.00 0 9 0 0
    total 9 0.00 0.00 0 9 0 0
    Misses in library cache during parse: 0
    Optimizer mode: CHOOSE
    Parsing user id: SYS (recursive depth: 1)
    Rows Row Source Operation
    0 TABLE ACCESS FULL ASSOCIATION$ (cr=3 pr=0 pw=0 time=633 us)
    select a.default_selectivity
    from
    association$ a where a.obj# = :1
    and a.property = :2
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 1 0.00 0.00 0 3 0 0
    total 3 0.00 0.00 0 3 0 0
    Misses in library cache during parse: 0
    Optimizer mode: CHOOSE
    Parsing user id: SYS (recursive depth: 1)
    Rows Row Source Operation
    0 TABLE ACCESS FULL ASSOCIATION$ (cr=3 pr=0 pw=0 time=2021 us)
    declare
    cost sys.ODCICost := sys.ODCICost(NULL, NULL, NULL, NULL);
    obj1 "ORDSYS"."ORDIMAGESIGNATURE" := "ORDSYS"."ORDIMAGESIGNATURE"(NULL);
    obj2 "ORDSYS"."ORDIMAGESIGNATURE" := "ORDSYS"."ORDIMAGESIGNATURE"(NULL);
    begin
    :1 := "ORDSYS"."ORDIMAGEINDEXSTATS".ODCIStatsIndexCost(
    sys.ODCIINDEXINFO('STORE',
    'IMGORDINDEX2',
    sys.ODCICOLINFOLIST(sys.ODCICOLINFO('STORE', 'IMG_ORDIMAGE', '"SIGN"', 'ORDIMAGESIGNATURE', 'ORDSYS', NULL)),
    NULL,
    0,
    0),
    NULL,
    cost,
    sys.ODCIQUERYINFO(2,
    sys.ODCIOBJECTLIST(sys.ODCIOBJECT('IMGSCORE', 'ORDSYS'))),
    sys.ODCIPREDINFO('ORDSYS',
    'IMGSIMILAR',
    NULL,
    141),
    sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'IMG_ORDIMAGE', 'STORE', '"SIGN"', NULL, NULL, NULL), sys.ODCIARGDESC(2, 'IMG_ORDIMAGE', 'STORE', '"SIGN"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
    :6,
    :7
    , obj2, :8, :9,
    sys.ODCIENV(:10,:11,:12,:13));
    if cost.CPUCost IS NULL then
    :2 := -1;
    else
    :2 := cost.CPUCost;
    end if;
    if cost.IOCost IS NULL then
    :3 := -1;
    else
    :3 := cost.IOCost;
    end if;
    if cost.NetworkCost IS NULL then
    :4 := -1;
    else
    :4 := cost.NetworkCost;
    end if;
    :5 := cost.IndexCostInfo;
    exception
    when others then
    raise;
    end;
    call count cpu elapsed disk query current rows
    Parse 1 0.01 0.01 0 0 0 0
    Execute 0 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 1 0.01 0.01 0 0 0 0
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 58 (recursive depth: 1)
    SELECT IMG_CD.image_id ,ORDSYS.IMGScore(123) score
    FROM IMG_CD ,img_ordimage Q, img_ordimage S
    WHERE Q.image_id = 11992231 AND
    ORDSYS.IMGSimilar(S.sign,Q.sign,' color="0,83",shape="0,17",location="0,26"',20,123)=1
    AND S.image_id = img_cd.image_id order by score
    call count cpu elapsed disk query current rows
    Parse 1 0.18 0.38 0 702 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 30 11.42 19.61 2568 4049 2217 432
    total 32 11.60 19.99 2568 4751 2217 432
    Misses in library cache during parse: 1
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    Rows Row Source Operation
    432 SORT ORDER BY (cr=11107 pr=2568 pw=0 time=21310948 us)
    432 NESTED LOOPS (cr=11107 pr=2568 pw=0 time=21156098 us)
    432 NESTED LOOPS (cr=10241 pr=2568 pw=0 time=21141805 us)
    1 TABLE ACCESS BY INDEX ROWID IMG_ORDIMAGE (cr=4 pr=0 pw=0 time=220 us)
    1 INDEX UNIQUE SCAN IMGORDIDXID (cr=2 pr=0 pw=0 time=63 us)(object id 200937)
    432 TABLE ACCESS BY INDEX ROWID IMG_ORDIMAGE (cr=10237 pr=2568 pw=0 time=21140720 us)
    432 DOMAIN INDEX IMGORDINDEX2 (cr=9848 pr=2568 pw=0 time=21125137 us)
    432 INDEX UNIQUE SCAN IMG_CD_PK (cr=866 pr=0 pw=0 time=11411 us)(object id 48686)
    select T."SIGN".signature,T.rowid
    from
    STORE.IMG_ORDIMAGE T where T.rowid in (select P.orig_rowid from
    STORE.IMGORDINDEX2_FT$ P WHERE A1 BETWEEN 0 AND 100 AND A2 BETWEEN 0
    AND 100 AND A3 BETWEEN 0 AND 100 AND A4 BETWEEN 0 AND 100 AND A5
    BETWEEN 0 AND 100 AND A6 BETWEEN 0 AND 100 AND A7 BETWEEN 0 AND 100
    AND A8 BETWEEN 0 AND 100 AND A9 BETWEEN 0 AND 100 AND A10 BETWEEN 0 AND
    100 AND A11 BETWEEN 0 AND 100 AND A12 BETWEEN 0 AND 100 AND A13
    BETWEEN 0 AND 100 AND A14 BETWEEN 0 AND 100 AND A15 BETWEEN 0 AND 100
    AND A16 BETWEEN 0 AND 100 AND A17 BETWEEN 0 AND 100 AND A18 BETWEEN 0
    AND 100 AND A19 BETWEEN 0 AND 100 AND A24 BETWEEN 0 AND 100 AND A25
    BETWEEN 0 AND 100 AND A26 BETWEEN 0 AND 100 AND A27 BETWEEN 0 AND 100
    AND A28 BETWEEN 0 AND 100 AND A29 BETWEEN 0 AND 100 AND A30 BETWEEN 0
    AND 100 AND A31 BETWEEN 0 AND 100 AND A32 BETWEEN 0 AND 100 AND A33
    BETWEEN 0 AND 100 AND A34 BETWEEN 0 AND 100 AND A35 BETWEEN 0 AND 100
    AND A36 BETWEEN 0 AND 100 AND A37 BETWEEN 0 AND 100 AND A38 BETWEEN 0
    AND 100 AND A39 BETWEEN 0 AND 100 AND A40 BETWEEN 0 AND 100 AND A41
    BETWEEN 0 AND 100 AND A42 BETWEEN 0 AND 100 AND A43 BETWEEN 0 AND 100
    AND A44 BETWEEN 0 AND 100 AND A45 BETWEEN 0 AND 100 AND A46 BETWEEN 0
    AND 100 AND A47 BETWEEN 0 AND 100 AND A48 BETWEEN 0 AND 100 AND A49
    BETWEEN 0 AND 100 AND A50 BETWEEN 0 AND 100 AND A51 BETWEEN 0 AND 100
    AND A52 BETWEEN 0 AND 100 AND A53 BETWEEN 0 AND 100 AND A54 BETWEEN 0
    AND 100 AND A55 BETWEEN 0 AND 100 AND A56 BETWEEN 0 AND 100 AND A57
    BETWEEN 0 AND 100 AND A58 BETWEEN 0 AND 100 AND A59 BETWEEN 0 AND 100
    AND A60 BETWEEN 0 AND 100 AND A61 BETWEEN 0 AND 100 AND A62 BETWEEN 0
    AND 100 AND A63 BETWEEN 0 AND 100 )
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 5290 0.64 1.69 0 7058 0 5289
    total 5292 0.64 1.70 0 7058 0 5289
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 58 (recursive depth: 1)
    alter session set sql_trace false
    call count cpu elapsed disk query current rows
    Parse 1 0.00 0.00 0 0 0 0
    Execute 1 0.00 0.00 0 0 0 0
    Fetch 0 0.00 0.00 0 0 0 0
    total 2 0.00 0.00 0 0 0 0
    Misses in library cache during parse: 0
    Optimizer mode: ALL_ROWS
    Parsing user id: 58
    OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 2 0.18 0.38 0 702 0 0
    Execute 3 0.00 0.00 0 0 0 0
    Fetch 30 11.42 19.61 2568 4049 2217 432
    total 35 11.60 19.99 2568 4751 2217 432
    Misses in library cache during parse: 1
    OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS
    call count cpu elapsed disk query current rows
    Parse 12 0.01 0.01 0 0 0 0
    Execute 11 0.00 0.00 0 0 0 0
    Fetch 5300 0.64 1.70 0 7092 0 5291
    total 5323 0.65 1.72 0 7092 0 5291
    Misses in library cache during parse: 1
    5 user SQL statements in session.
    10 internal SQL statements in session.
    15 SQL statements in session.
    Trace file: storedb_ora_27804.trc
    Trace file compatibility: 10.01.00
    Sort options: default
    1 session in tracefile.
    5 user SQL statements in trace file.
    10 internal SQL statements in trace file.
    15 SQL statements in trace file.
    9 unique SQL statements in trace file.
    5535 lines in trace file.
    34 elapsed seconds in trace file.
    oracle@storea 10:24 ~/admin/storedb/udump >
    Message was edited by:
    user591430

  • Dreamweaver CC causes wordpress localhost to show a blank page

    I have WordPress set up using WAMP as the local server and it has been working with no problem. Win7 64-bit.
    I have just installed Dreamweaver CC and set up testing sites using HTTP://localsite/sitename for several 'normal' websites I manage, and that was fine; I could see them in Live View.
    I then added a local test site to view the website I manage under WordPress, and when I went to Live View all I could see was a blank page.
    I then closed Dreamweaver and just opened my local WordPress site using HTTP://localhost/wordpress, and that too just showed a blank page.
    I have no problem getting into my WordPress site via the local dashboard, but I cannot view any of the pages.
    I searched the web and tried lots of workarounds but nothing worked. In the end I uninstalled WordPress, deleted the WP database and reinstalled everything with no problem.
    Localhost was working with WordPress again. However, as soon as I tried to access the local WordPress site with Dreamweaver the problem returned.
    This is very frustrating, as recreating a WordPress site is not as simple as just copying a load of HTML and PHP pages.
    I should add there are no errors shown.
    Anyone any ideas?

    Hi Nancy,
    Thanks for taking the time to reply, but unfortunately it does not help.
    I have my WordPress files and themes in the correct directories for WAMP. I have been using WAMP to test my WordPress site for some time, but opening the PHP files in Brackets to write the code. I have several non-WordPress websites I maintain which are written in normal HTML, PHP and CSS, and I like to code manually. It is useful though to write the code in Brackets / Dreamweaver to check syntax and adjust the CSS on the fly.
    Everything was working fine until I installed Dreamweaver and set up a testing site as you indicate above, and now all I get is a blank page, not only in Live View in Dreamweaver but also if I simply type HTTP://localhost/wordpress directly into a browser window. There are no error messages though, not even a 404 'Not Found' message, just a blank page. I can get into the dashboard though by typing http://localhost/wordpress/wp-admin/ and all my pages and content are there, but if I try to view a page all I get is the same blank page.
    As I said, I totally uninstalled WordPress including the database and reinstalled, and all worked fine. I could see the site using http://localhost/wordpress/ but as soon as I set up a testing server in Dreamweaver the problem returned. Dreamweaver must be altering something in the WordPress files / database to cause this problem.
    Has no one else come across this?
