Database capture on 10.2.0.4 to be replayed on 11g

Hi!
I have browsed the net for a while now and I was wondering how you folks replay a workload captured on 10.2.0.4 on 11g. The capturing part is rather simple :)
When it comes to replaying, do you Data Pump your schema from 10.2.0.4 to 11g and replay the workload there? I somehow can't see a different way - a complete migration of the 10.2.0.4 database into the 11g home seems a bit off the mark, though it would be the more realistic scenario. IMO Database Replay really shines with large applications (not PeopleSoft, a tad smaller) consisting of many schemas, and those will take some effort to Data Pump...
Any thoughts?
Cheers,
Martin
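
For reference, the replay side on 11g is driven by DBMS_WORKLOAD_REPLAY plus the wrc client, and the test database only needs to match the logical state of production as of the capture start - whether you get there by Data Pump or a full restore makes no difference to the replay itself. A minimal sketch, assuming the capture files have been copied into a directory mapped by a directory object named REPLAY_DIR (all names here are placeholders):

BEGIN
  -- convert the raw capture files into replay format (run once)
  DBMS_WORKLOAD_REPLAY.PROCESS_CAPTURE(capture_dir => 'REPLAY_DIR');
  DBMS_WORKLOAD_REPLAY.INITIALIZE_REPLAY(
    replay_name => 'REPLAY_1',
    replay_dir  => 'REPLAY_DIR');
  DBMS_WORKLOAD_REPLAY.PREPARE_REPLAY;
END;
/
-- start at least one replay client from the OS shell:
--   wrc system/<password> mode=replay replaydir=/path/to/replay/files
BEGIN
  DBMS_WORKLOAD_REPLAY.START_REPLAY;
END;
/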

user12009184 wrote:
Hi,
My current database version is 10.2.0.4
OS platform AIX 5.3
I want to upgrade my 10g database to the latest version of 11g on the AIX 5.3 platform.
Kindly help me: from where can I download the 11g base release and the latest 11g patch set to upgrade the database?
What is the latest version of 11g?
What steps do I need to follow?
Regards,
Pravin
The latest version of 11g is 11.2.0.2.3.
Go to MetaLink and search for patch 10098816 for the 11.2.0.2.0 base release, and patch 12419331 to patch it up to 11.2.0.2.3.
Hope this helps
Cheers
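
The steps themselves aren't covered above; for reference, a rough sketch of the manual upgrade path once the 11.2 software is installed (paths assume default locations, and DBUA is the guided alternative):

-- against the running 10.2 database, run the 11.2 pre-upgrade
-- information tool (utlu112i.sql, copied over from the new home)
SQL> @utlu112i.sql
-- then start the database from the new 11.2 home and upgrade the dictionary
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
-- finally, recompile invalid objects
SQL> @?/rdbms/admin/utlrp.sql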

Similar Messages

  • SQLStateMapping.java:70 Error When Loading Database Capture Script Output

    I'm running "Migration->Third Party Database Offline Capture->Load Database Capture Script Output" (Sybase 12) (SQLDeveloper 1.5.5)
    After Tables are loaded (16000+ tables), I'm getting the following error in Migration Log:
    Error ocurred during capture: In Columns for <column_name>
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:70)
    I could not find any hits that match this. What's the best method to troubleshoot this?

    Log a service request with the offline scripts and we can check them out and, if needed, forward them to Development.

  • IGNORE - Fixed Single-Database Capture and Apply Example, multiple tables

    I ran the demo setup for "Single-Database Capture and Apply Example" ( http://download.oracle.com/docs/cd/B28359_01/server.111/b28321/capappdemo.htm#BCGBIEDJ )
    Everything works fine. I now want to capture changes to another table in the HR schema, using the same DML handler.
    I am uncertain of what to use for the stream name in the following (and related calls). I want the captures for the new table, "job_History" to go to the same queue. Do I use the same stream name, or do you have to create a new stream for each table capture? Thanks.
    BEGIN
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name     => 'hr.job_history',
        streams_type   => 'capture',
        streams_name   => 'capture_emp',
        queue_name     => 'strmadmin.streams_queue',
        include_dml    => TRUE,
        include_ddl    => FALSE,
        inclusion_rule => TRUE);
    END;
    /
    Edited by: user614224 on Nov 15, 2010 6:54 AM

    Please edit the subject of this post and change it to "Please Ignore." Then post your question, with full version information, to the Streams forum.
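
    For reference, a sketch of how to confirm where the new rules landed - the standard Streams dictionary views list every table rule per streams_name, so you can see whether 'hr.job_history' joined the existing capture_emp process:
    SELECT streams_name, streams_type, table_owner, table_name, rule_name
    FROM   dba_streams_table_rules
    ORDER  BY streams_name, table_name;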

  • Single database capture and apply example

    I'm trying this example (with some minor changes) from the 9.2 Streams documentation, chapter 20. I have this message in dba_apply_error (the strings are the local and source transaction IDs):
    9.0.362 1.21.364
    ORA-00942: table or view does not exist
    Is there a way to find out more about the error. Are there any other tables that I could be missing, besides the two that the example uses?
    Thanks
    chaim

    Solved. (The streamadm user only had privileges on the source tables through a role; once direct grants were in place it worked well.)
    Thanks
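
    For reference, a sketch of the diagnosis and fix described above (streamadm and hr.employees follow this thread's setup - adjust to the actual apply user and tables):
    -- see what the apply process actually hit
    SELECT apply_name, source_transaction_id, error_message
    FROM   dba_apply_error;
    -- the fix: direct grants rather than grants through a role
    GRANT SELECT, INSERT, UPDATE, DELETE ON hr.employees TO streamadm;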

  • How to capture Field validation errors in the Error table in ODI 11g

    Hello,
    We are using ODI 11g (11.1.1.5); the scenario is to read data from a flat file (.txt) and do a bulk insert into an MS SQL Server database table.
    We need to capture the error records (where the source field is longer than the target column) in the error table. However, the interface errors out at step "Loading - SrcSet0 - Load data (BULK INSERT)" with error message "[SQLServer JDBC Driver][SQLServer]Bulk load data conversion error (truncation) for row 33, column 6", and these errors are not being inserted into the error table.
    Is there a way to capture these errors in the error table? Below are the KM details.
    LKM: LKM File to MSSQL (BULK)
    CKM: CKM SQL
    IKM: IKM MSSQL Incremental Update
    FLOW_CONTROL is set to true for the IKM.
    Thanks,
    Krishna

    Hello,
    I had the same problem with ODI when trying to BULK INSERT a txt file into MS SQL. Check the cell(s) in your source file (txt) - it looks like a value contains hidden characters: when pressing F2 to edit the value in the cell, the cursor appeared far to the right of the visible end of the value. Try using backspace to delete the hidden characters and verify the above; if everything is OK, modify your txt file accordingly. Let me know if it works.
    BTW, I created a procedure inside MS SQL 2008 R2 which BULK INSERTs the records into a temporary (#...) table and immediately, without any verification, inserts all the records into the final table in the DWH. Here is the statement:
    if object_id('TEMPDB..#<table>','U') is not null drop table #<table>
    CREATE TABLE [dbo].[#<table>] (
    [1] [varchar] (50) NULL,
    [2] [varchar] (100) NULL,
    [3] [varchar] (100) NULL,
    [4] [varchar] (100) NULL,
    [5] [varchar] (100) NULL,
    [6] [varchar] (100) NULL,
    [7] [varchar] (100) NULL,
    [8] [varchar] (100) NULL,
    [9] [varchar] (100) NULL,
    [10] [varchar] (100) NULL,
    [11] [varchar] (100) NULL
    ) ON [PRIMARY]
    BULK INSERT #<table> FROM 'N:\<table>.txt'
    WITH (FIRSTROW=2, KEEPNULLS, CODEPAGE=1252, FIELDTERMINATOR='\t')
    INSERT INTO <table>
    SELECT * FROM #<table>
    and it works! Let me also know if you find any other way around.
    regards
    Anatoli

  • Database host name changed, what needs to be configured in obiee 11g

    Hi,
    we recently changed the database hostname. What are the next steps to configure in OBIEE? When I log in to Oracle Enterprise Manager, all the services are down.
    Please, any help is appreciated.
    Thanks,
    Ashwini K

    Check this: Rittman Mead Consulting » Blog Archive » Oracle BI EE 11g – Managing Host Name Changes

  • Capturing data from a select many checkbox using adf components (11g)

    Hi
    I am new to this technology. Can anybody tell me how to capture the data selected in a checkbox?
    I created the checkbox (af:selectManyCheckbox) using a web service.
    Thanks,
    Tim

    Hi
    @Jonas : Thanks for the reply
    In the link it sets the values using a backing bean method like this:
    <af:selectManyCheckbox label="Locations" id="smc1"
    value="#{sessionScope.weatherBean.locationsSelected}">
    <f:selectItems value="#{sessionScope.weatherBean.locationSelectItems}"
    id="si1"/>
    </af:selectManyCheckbox>
    but in my case I am populating the values in the checkbox using a web service (result is the list binding created when adding the web service to the data control):
    <af:selectManyCheckbox value="#{bindings.result.inputValue}"
    label="result" id="smc1">
    <f:selectItems value="#{bindings.result.items}" id="si1"/>
    </af:selectManyCheckbox>
    So for now I am getting values like red, blue, green.
    Now, if I want to select red and blue and display them in a text field on the click of a command button - please guide me how to save the selected values and display multiple values in an input/output text field.

  • Which database driver is required for weblogic 10.3 and Oracle DB 11g both on MS2008 separate server

    Hi,
    I am trying to configure JDBC with WebLogic. Can anyone tell me which driver needs to be selected for WebLogic 10.3 and Oracle DB 11g, both on MS2008, on separate servers?
    If I use the BEA Oracle Driver (Type 4), versions 9.0.1, 9.2.0, 10, 11, I get this error (see snap:2):
    Connection test failed.
    [BEA][Oracle JDBC Driver]Error establishing socket. Unknown host: hdyhtc137540d
    weblogic.jdbc.base.BaseExceptions.createException(Unknown Source)
    weblogic.jdbc.base.BaseExceptions.getException(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnectionHelper(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.makeConnection(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.connectAndAuthenticate(Unknown Source)
    weblogic.jdbc.oracle.OracleImplConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseConnection.connect(Unknown Source)
    weblogic.jdbc.base.BaseConnection.setupImplConnection(Unknown Source)
    weblogic.jdbc.base.BaseConnection.open(Unknown Source)
    weblogic.jdbc.base.BaseDriver.connect(Unknown Source)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...
    and when I use Oracle's (Thin) driver, versions 9.0.1, 9.2.0, 10, 11, I get this error:
    Connection test failed.
    Io exception: The Network Adapter could not establish the connection
    oracle.jdbc.driver.SQLStateMapping.newSQLException(SQLStateMapping.java:101)
    oracle.jdbc.driver.DatabaseError.newSQLException(DatabaseError.java:112)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:173)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:229)
    oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:458)
    oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:411)
    oracle.jdbc.driver.PhysicalConnection.<init>(PhysicalConnection.java:490)
    oracle.jdbc.driver.T4CConnection.<init>(T4CConnection.java:202)
    oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:33)
    oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:474)
    com.bea.console.utils.jdbc.JDBCUtils.testConnection(JDBCUtils.java:505)
    com.bea.console.actions.jdbc.datasources.createjdbcdatasource.CreateJDBCDataSource.testConnectionConfiguration(CreateJDBCDataSource.java:369)
    sun.reflect.GeneratedMethodAccessor826.invoke(Unknown Source)
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    java.lang.reflect.Method.invoke(Method.java:597)
    org.apache.beehive.netui.pageflow.FlowController.invokeActionMethod(FlowController.java:870)
    org.apache.beehive.netui.pageflow.FlowController.getActionMethodForward(FlowController.java:809)
    org.apache.beehive.netui.pageflow.FlowController.internalExecute(FlowController.java:478)
    org.apache.beehive.netui.pageflow.PageFlowController.internalExecute(PageFlowController.java:306)
    org.apache.beehive.netui.pageflow.FlowController.execute(FlowController.java:336)
    ...

    I get this error when I click the Test Configuration button to test the connection with the Oracle DB.

  • Problem converting captured SQL Server 2000 database to Oracle

    Java(TM) Platform     1.6.0_17
    Oracle IDE     2.1.1.64.45
    Versioning Support     2.1.1.64.45
    After creating the MWREP user, granting it privileges and creating the migration repository, I captured a small SQL Server 2000 database (1 table), but after performing "Convert to Oracle" I get the message that it transformed correctly - yet no converted model is shown. I was originally using the JRE supplied with SQL Developer 2.1.1, but replaced it with 1.6 U17 after researching the problem here.
    The only message in the Migration Log is as follows:
    Catalog RAC, Schema dbo coalesced to single schema dbo_RAC
    The following message appears in the console:
    SELECT DISTINCT(REF_ID_FK), REF_TYPE FROM MD_ADDITIONAL_PROPERTIES WHERE PROP_KEY IN (?) AND CONNECTION_ID_FK = ?
    I have tried this with a more complex database, with no luck.
    Any thoughts?

    I did an offline capture.
    I used Tools->Migration->Third Party Database Offline Capture->Create Database Capture Scripts to generate scripts OMWB_OFFLINE_CAPTURE.BAT, SS2K_BCP_SCRIPT.BAT and sqlserver2000.ocp. Then the SQL Server DBA ran OMWB_OFFLINE_CAPTURE.BAT and sent me the output.
    I used Tools->Migration->Third Party Database Offline Capture->Load Database Capture Scripts Output to capture the model into the repository successfully.

  • Oracle distributed doc capture - database error

    hi all,
    I work on a capture/archive project and am implementing ODDC (installation and integration).
    I have done all the configuration, including the scanning profiles (the last step).
    When I test the system, I find that no data is saved upon scanning. After commit there is data in the commit table I built and mapped to the index fields, and in the ECAudit table, but no data in tables like EcBatches and others.
    I see more than one column of type BLOB (I mean columns in capture system core tables like ECBatches) in many tables, and these tables do not contain any data!
    I am using the evaluation period license,
    Oracle 10g Release 2,
    Windows Server 2003 Standard Edition R2 SP2.
    Any ideas, please?

    It has started! But is there any BLOB field in the capture system's usual database?
    I mean the system database that Capture builds - is there any BLOB field there? Any ideas, please?
    What about the check integrity utility - is it stable, and what can it do for me?
    I am tired of this problem... can anyone help me, please?
    Is there any data that must be added to the database before the commit operation?
    I don't have any data beforehand, and not all tables collect data - just ECAUDIT and my commit table, where the index fields are saved. I named it dbcommit; here are the details:
    Database info:
    user: capture
    password: capture
    database: capture
    Commit table: DBCOMMIT
    COLUMN NAME   DATA TYPE   SIZE   DESCRIPTION
    PERMIT_ID     integer
    DOC_TYPE      varchar2    500    (from 1 to 14, from a pick list)
    AREA_ID       integer            default = 09
    SCAN_DATE     date
    USER_ID       integer
    DOC_PATH      varchar2    500    document path on the capture server
    INDEX_DATE    date
    When I run a SELECT statement like:
    select * from ECBATCHES;
    I get the error SP2-0678, which means:
    The "SP2" error messages are messages issued by SQL*Plus. The SP2-0678 error message text is "Column or attribute type can not be displayed by SQL*Plus." What you probably tried to do is to dump a binary data type, i.e. BLOB, to the screen in SQL*Plus. The problem is that SQL*Plus can only handle text data. It cannot handle binary data. Therefore, you are getting an error message telling you that this data type can not be displayed in your SQL*Plus session. Try your SELECT statement again but eliminate the BLOB column(s) from your query.
    The question is: why are some columns in tables like ECBATCHES of type BLOB while the other columns are varchar2 and integer - and why are these columns empty? There is an error somewhere, but I have tried reinstalling on a new machine and database and nothing changed!
    Any ideas, please :(
    Edited by: yos on Nov 21, 2008 3:56 AM
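
    A sketch of the workaround from the SP2-0678 explanation above - list the ECBATCHES columns first, then name only the non-BLOB ones in the query:
    -- which columns are BLOBs?
    SELECT column_name, data_type
    FROM   all_tab_columns
    WHERE  table_name = 'ECBATCHES';
    -- then select the non-BLOB columns explicitly instead of SELECT *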

  • Problem in OWB map for source change capture

    The problem is as follows:
    Our OWB map between two different databases captures data changes at the source and populates them into the target. It is a one-to-one map. The map fails to process some rows from the source database, yet it reports 0 errors in the audit view, so we had inferred that the map was running successfully.
    In our case we have a composite key (instance_id, version_number) at source as well as target, and one instance_id can have more than one version_number. We have observed that the view picks up all the version_numbers for a particular instance_id, but the map fails to populate the last version_number into the target table.
    When this map was executed in another environment, it successfully populated all the data into the target.
    A brief description of the map:
    The map has a view which captures all data changes (insert and update). From this view we filter out the rows already processed in a previous map run (with the help of a watermark), then sort the rows by DML type and watermark. These rows are then mapped to a target table. The loading type of the map is INSERT/UPDATE.

    Hi Nagesh,
    When doing I18N for that application, go to the Navigator tab.
    There, select the particular project -> src -> Packages -> sap -> vijay.
    To internationalize the Web Dynpro application, copy the automatically generated *.xlf files and save them under a new name in the same directory.
    The new name must meet the following convention:
    • .xlf
    For example, if you are creating *.xlf files for German, use the language key de.
    Click OK.
    After that you can edit and translate these new *.xlf files in the S2X Editor.
    ApplyTemFirstView.wdview_de.xlf -> click on the Resource Text tab.
    Select the particular text and change the language to German -> click the EDIT button.
    Enter the German text -> click OK.
    Now go to the Web Dynpro Explorer tab.
    Select the project -> right-click -> Rebuild Project.
    Select the project -> right-click -> Reload.
    Then deploy the application.
    To check in the browser whether it is converted to German or not:
    Open Internet Explorer.
    Tools -> Internet Options -> click the Languages button -> click the ADD button.
    Select German -> click OK.
    Now check in the Portal:
    Create the Web Dynpro iView in the Portal, assign that iView to the particular workset -> assign it to a role.
    Then the user can change the language.
    Regards
    Vijay Kalluri

  • Error creating Streams example - ORA-01729: database link name expected

    Running 11gR1, using the example from the doc, "4 Single-Database Capture and Apply Example".
    The database name is "11G3". When I run the Step 3 statement below, I get the error below. What am I doing wrong? Thanks!
    DECLARE
      iscn NUMBER; -- Variable to hold instantiation SCN value
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'hr.employees',
        source_database_name => '11G3',
        instantiation_scn    => iscn);
    END;
    /
    Error at line 1
    ORA-01729: database link name expected
    ORA-06512: at "SYS.DBMS_LOGREP_UTIL", line 742
    ORA-06512: at "SYS.DBMS_APPLY_ADM", line 726
    ORA-06512: at line 5

    The manual states the name must be GLOBAL_NAMES-compliant, which means the DB link name must equal the target DB_NAME - and such an identifier cannot start with a digit. I strongly suggest taking a careful look at the manuals, as Streams has a lot of other tricks under the hat awaiting you.
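
    For illustration, the same call with a GLOBAL_NAMES-compliant source name (ORCL.EXAMPLE.COM is a placeholder - substitute the source database's global name, e.g. the result of SELECT * FROM GLOBAL_NAME):
    DECLARE
      iscn NUMBER;
    BEGIN
      iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
      DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN(
        source_object_name   => 'hr.employees',
        source_database_name => 'ORCL.EXAMPLE.COM', -- placeholder global name
        instantiation_scn    => iscn);
    END;
    /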

  • Howto capture/replay workload (for performance testing purposes)

    Hi,
    We have a customer who is buying new hardware for his Oracle database server. Because he is hesitating between two possible storage solutions and is not convinced that solution A will be significantly better than solution B, he wants a proof of concept.
    So this is what we will do:
    - Set up a test environment with hardware solution A and another one with hardware solution B.
    - We will backup and restore his database on both test servers.
    - We will run a workload on both servers and monitor performance with AWR and ADDM
    - Compare performance
    I would like to:
    - Make a consistent backup of the production database
    - Capture 24 hours work on the customers production database
    - Restore database on both test servers.
    - Replay the captured workload on both test servers.
    - Compare performance
    Does anyone know what tools I can use to do the capture/replay part?
    All suggestions are appreciated.
    Thank You,
    Pieter

    I have been playing with LogMiner and auditing, but these don't solve the problem 100%...
    Start logminer:
    EXECUTE DBMS_LOGMNR.START_LOGMNR(-
      STARTSCN => 404809, -
      ENDSCN   => 404975, -
      OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
                  DBMS_LOGMNR.CONTINUOUS_MINE + -
                  DBMS_LOGMNR.COMMITTED_DATA_ONLY + -
                  DBMS_LOGMNR.NO_ROWID_IN_STMT);
    Get the SQL:
    SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
    WHERE USERNAME != 'SYS'
    AND (SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM', 'SYSMAN'));
    This works for insert/update/delete, but it can't capture selects...
    Auditing:
    AUDIT SELECT TABLE, INSERT TABLE, DELETE TABLE, EXECUTE PROCEDURE BY ACCESS;
    SELECT SQL_TEXT FROM DBA_AUDIT_TRAIL;
    These SQL_TEXT statements show the bind variables, e.g.:
    SELECT TO_NUMBER(PARAMETER_VALUE) FROM MGMT_PARAMETERS WHERE PARAMETER_NAME = :B1
    This is not executable - I need an executable result...
    Does anyone have a better way to accomplish what I need?
    Thank You.
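
    For reference, on 10.2.0.4 the capture half of this can also be done with Database Replay's DBMS_WORKLOAD_CAPTURE (capture support on 10.2.0.4 has to be enabled explicitly per Oracle's support notes, and replay requires an 11g target). A minimal capture sketch, assuming a directory object CAPTURE_DIR pointing at an empty OS directory:
    BEGIN
      DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
        name     => 'POC_24H',     -- capture name (placeholder)
        dir      => 'CAPTURE_DIR', -- directory object, not an OS path
        duration => 86400);        -- stop automatically after 24 hours
    END;
    /
    -- or stop it manually:
    BEGIN
      DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
    END;
    /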

  • Register in integrated capture in GoldenGate 12c

    register extract <process name> database
    What is the detailed workflow when registering an Extract for integrated capture in GoldenGate 12c?
    Thanks,
    T N

    Hi T N,
    Integrated capture is one of the new features of Oracle GoldenGate 11g. In integrated capture mode, the Extract process does not read the redo logs directly; instead, it interacts directly with the database log mining server.
    The log mining server reads the database redo log files and captures the changes in the form of Logical Change Records (LCRs). These records are then written to the Oracle GoldenGate trail files by the Extract process.
    In the earlier classic capture mode, the Extract reads the redo log files directly, captures the changes and writes them to the GoldenGate trail files. Here it is totally different: Oracle has bound the GoldenGate process to the Oracle log mining server to capture the changes, hence the name "integrated capture".
    To integrate the Extract with the database log mining server, we first need to register the Extract with the database:
    REGISTER EXTRACT <EXTRACT_NAME> DATABASE
    This registers the Extract with the database so that the Extract process integrates with the Oracle log mining server. Once you register your primary Extract with the database, a dedicated log mining server is automatically assigned to that primary Extract group; that log mining server captures the changes from the redo logs and hands them to the Extract process, which in turn writes them to the GoldenGate trail files.
    Only a primary Extract can be registered with the database - if you are using a data pump Extract, you cannot register it.
    Regards,
    Veera
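
    For reference, a quick way to verify the registration from SQL*Plus - after REGISTER EXTRACT <EXTRACT_NAME> DATABASE has been run in GGSCI (following a DBLOGIN), the dedicated log mining server shows up as a capture process in the data dictionary:
    SELECT capture_name, capture_type, status
    FROM   dba_capture;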

  • Low database cache hit ratio (85%)

    Hi Guys,
    I understand that a high db cache hit ratio doesn't indicate that the database is healthy.
    The database might be doing additional "physical" reads due to un-tuned SQL.
    However, can someone explain why a low cache hit ratio might not indicate that the db is unhealthy, as in needing additional memory allocated?
    What I can think of is:
    1. The database might query different data most of the time, so the data is not read again from cache before it ages out. Even if I add memory, the data might not be read again (from memory).
    2. ?
    3. ?
    I'm quite reluctant to list the databases with a hit ratio below 90% in the monthly report to management - to them, below 90% means unhealthy.
    If these ratios are to be used in the monthly report, there will be a long section explaining why the ratios are not met even though there is no performance concern.
    As such, I need your expert advice on this.
    thanks
    Edited by: Chewy on Mar 13, 2012 1:23 AM

    Nikolay Savvinov wrote:
    In isolation, ratios are useless, but trends in ratios can point to a potential problem. If your BCHR is steadily degrading over time, this is something to worry about (you'll have to examine your application for scalability issues).
    I used to think that there was a case for trending through a ratio in the days when databases were small, simple and (by modern standards) not very busy. But I'm no longer sure it was even a good idea then. How much of a change do you need to see before you start worrying - and what time-granularity would you take as your baseline? When a ratio varies between 98% and 99% during daylight hours, how do you spot a very large problem that's only going to make a change of 0.01% over the course of a couple of weeks?
    I really don't think there's any good SIMPLE way of producing a management sound-bite for every database in the system; each database needs a personal touch, and the number of figures you need to supply on each is not going to be easy to grasp without some graphic assistance. A suggestion I have is simply to pick three "representative" queries from the application (one "small", one "medium" and one "large") and run them once every hour, capturing the plan_hash_value, elapsed time, disk reads, buffer gets, and CPU for each. A daily graph - four lines each - of each query will give management the big picture of variation in response time; a longer-term graph based on the daily average with (say) best and worst excluded will give a trend warning. Obviously each database (or even application within a database) needs its own three queries, and there may be periods during the day when it is not relevant to worry about a particular query.
    (NB In the past I've run baseline queries from a pl/sql package called by dbms_job, or dbms_scheduler, and stored the resulting cost figures in the database - capturing all the session stats, wait event and time model information)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    Author: Oracle Core
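
    A minimal sketch of the hourly baseline idea above, trimmed to a single query and elapsed time only (BASELINE_RESULTS, RUN_BASELINE and the hr.employees query are illustrative placeholders; a full version would also capture plan_hash_value, session stats, wait events and CPU as described):
    CREATE TABLE baseline_results (
      run_time   DATE,
      query_tag  VARCHAR2(30),
      elapsed_cs NUMBER          -- elapsed time in centiseconds
    );
    CREATE OR REPLACE PROCEDURE run_baseline IS
      t0 NUMBER;
      n  NUMBER;
    BEGIN
      t0 := DBMS_UTILITY.GET_TIME;                -- centisecond counter
      SELECT COUNT(*) INTO n FROM hr.employees;   -- stand-in "medium" query
      INSERT INTO baseline_results
      VALUES (SYSDATE, 'MEDIUM', DBMS_UTILITY.GET_TIME - t0);
      COMMIT;
    END;
    /
    BEGIN
      DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'BASELINE_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'RUN_BASELINE',
        repeat_interval => 'FREQ=HOURLY',
        enabled         => TRUE);
    END;
    /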
