Slow delete on a table with one CLOB column

Hi,
I have a table with one CLOB column, and deleting even a single row from it takes approximately 16 seconds. Since UNDO isn't generated for CLOBs (at least not in the UNDO tablespace), I can't figure out why this is happening. The CLOB is defined with a RETENTION clause, so it depends on UNDO_RETENTION, which is set to 900. There was no lock from another session on this table.
The table currently contains only 6 rows, but it used to be much bigger, so I thought a full table scan might be happening during the delete. But even if I limit the DELETE statement with a ROWID (to avoid a FTS), it doesn't help:
SQL> select * from v$version;
BANNER
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
PL/SQL Release 11.1.0.6.0 - Production
CORE    11.1.0.6.0      Production
TNS for 32-bit Windows: Version 11.1.0.6.0 - Production
NLSRTL Version 11.1.0.6.0 - Production
SQL> select count(*) from scott.packet;
  COUNT(*)
         6
SQL> column segment_name format a30
SQL> select segment_name
  2    from dba_lobs
  3  where owner = 'SCOTT'
  4     and table_name = 'PACKET';
SEGMENT_NAME
SYS_LOB0000081487C00002$$
SQL>  select segment_name, bytes/1024/1024 MB
  2    from dba_segments
  3  where owner = 'SCOTT'
  4      and segment_name in ('PACKET', 'SYS_LOB0000081487C00002$$');
SEGMENT_NAME                           MB
PACKET                               ,4375
SYS_LOB0000081487C00002$$             576
SQL> -- packet_xml is the CLOB column
SQL> select sum(dbms_lob.getlength (packet_xml))/1024/1024 MB from scott.packet;
        MB
19,8279037
SQL> column rowid new_value rid
SQL> select rowid from scott.packet where rownum=1;
ROWID
AAAT5PAAEAAEEDHAAN
SQL> set timing on
SQL> delete from scott.packet where rowid = '&rid';
old   1: delete from scott.packet where rowid = '&rid'
new   1: delete from scott.packet where rowid = 'AAAT5PAAEAAEEDHAAN'
1 row deleted.
Elapsed: 00:00:15.64
From another session I monitored v$session.event for the session performing the DELETE, and the reported wait event was 'db file scattered read'.
Someone asked Jonathan Lewis a similar-looking question (under comment #5) here: http://jonathanlewis.wordpress.com/2007/05/11/lob-sizing/ but unfortunately I couldn't find whether he ever wrote an answer or note about it.
So if anyone has any suggestions, I'd appreciate it very much.
Regards,
Jure
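
For reference, here is a minimal sketch (not taken from the thread) of the kind of 10046 trace plus tkprof capture that the follow-up below refers to; the tracefile identifier is made up, and level 8 simply adds wait events:
ALTER SESSION SET tracefile_identifier = 'slow_delete';
ALTER SESSION SET events '10046 trace name context forever, level 8';
-- run the slow statement, e.g. the ROWID-limited DELETE above, then:
ALTER SESSION SET events '10046 trace name context off';
-- format the resulting trace file with tkprof (sys=yes keeps the recursive SQL):
-- tkprof <trace_file>.trc slow_delete.txt sys=yes waits=yes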

After reviewing the tkprof output, as suggested by user503699, I noticed that the DELETE itself is instantaneous. The problem is another, recursive statement:
select /*+ all_rows */ count(1)
from
"SCOTT"."MESSAGES" where "PACKET_ID" = :1
call     count       cpu    elapsed       disk      query    current        rows
Parse        1      0.00       0.00          0          0          2           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      1.40      16.93     125012     125128          0           1
total        3      1.40      16.93     125012     125128          2           1
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: ALL_ROWS
Parsing user id: SYS   (recursive depth: 1)
Rows     Row Source Operation
      1  SORT AGGREGATE (cr=125128 pr=125012 pw=125012 time=0 us)
      0  TABLE ACCESS FULL MESSAGES (cr=125128 pr=125012 pw=125012 time=0 us cost=32900 size=23056 card=5764)
I checked whether there was any "ON DELETE" trigger, and since there wasn't, I suspected this might be a case of an unindexed foreign key. As soon as I created an index on SCOTT.MESSAGES.PACKET_ID, the DELETE executed immediately. The "funny" thing is that the table SCOTT.MESSAGES is empty, but it still has 984 MB of extents allocated (it was never truncated, so its high-water mark stayed high), which is why a time-consuming full table scan was occurring on it for the foreign-key check.
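For anyone hitting the same issue, here is a rough sketch of the fix plus a quick check for single-column foreign key columns that have no index starting on them (the index name below is made up):
CREATE INDEX scott.messages_packet_id_ix ON scott.messages (packet_id);
SELECT cc.owner, cc.table_name, cc.column_name, cc.constraint_name
  FROM dba_constraints c
  JOIN dba_cons_columns cc
    ON cc.owner = c.owner
   AND cc.constraint_name = c.constraint_name
 WHERE c.constraint_type = 'R'
   AND c.owner = 'SCOTT'
   AND NOT EXISTS (SELECT NULL
                     FROM dba_ind_columns ic
                    WHERE ic.table_owner = cc.owner
                      AND ic.table_name  = cc.table_name
                      AND ic.column_name = cc.column_name
                      AND ic.column_position = 1);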
Thanks for pointing me to the 10046 trace which solved the problem.
Regards,
Jure

Similar Messages

  • How to show two seperate pivot tables with one select column

    Hi All
    My client wishes to have two pivot tables, one showing positive results and the other showing negative results.
    For Example:
    DIMENSION
    BUSINESS A          1000
    BUSINESS B          500
    BUSINESS C          100
    DIMENSION
    BUSINESS A          -1000
    BUSINESS B          -500
    BUSINESS C          -100
    Is it possible to then select the different DIMENSION with one select column for both?
    Thanks

    Not sure I got it right, but try this:
    pull the number column twice and set one copy to col * -1,
    then use two pivot tables, one for each number type.
    cool as ~ http://cool-bi.com
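    In plain SQL terms, the "pull the column twice and flip the sign" idea looks roughly like this (table and column names are made up):
    SELECT business,
           CASE WHEN amount >= 0 THEN amount END      AS positive_amount,
           CASE WHEN amount < 0  THEN amount * -1 END AS negative_amount
      FROM results;
    Each pivot table then shows only one of the two derived columns.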

  • Error when trying to use (GUI Wizards) on Table with BLOB, CLOB columns.

    I enjoyed watching the demo of ODP.NET/VS 2005 Environment where you can just drag the new command builders, data sets, etc.. but when I try this with a table that contains a BLOB and CLOB column it returns an error and basically cannot build the SQL needed to process these field types.
    I would assume it could do this natively since it is somewhat of a primitive type. I also tried to point it to a Stored Proc that returned a blob and it caused an error when building the SQL.
    So it appears it is not possible to use the GUI data access wizards for these column types. I can use them in code with the ODP.NET with no problems but it would be nice to be able to utilize the GUI Command Builders, etc...

    BLOB and CLOB are not supported, I guess. Oracle is a bit sluggish when it comes to VS and MS products!

  • Convert a table with one column to panelList with outputText

    Hi,
    I have a table with one column, I would like to change it to use panelList to present it instead. What will be the syntax for panelList?
    <af:table value="#{bindings.ItasUiRuleParamsVO2.collectionModel}"
    var="row"
    rows="#{bindings.ItasUiRuleParamsVO2.rangeSize}"
    emptyText="#{bindings.ItasUiRuleParamsVO2.viewable ? 'No data to display.' : 'Access Denied.'}"
    fetchSize="#{bindings.ItasUiRuleParamsVO2.rangeSize}"
    rowBandingInterval="0"
    selectedRowKeys="#{bindings.ItasUiRuleParamsVO2.collectionModel.selectedRow}"
    selectionListener="#{bindings.ItasUiRuleParamsVO2.collectionModel.makeCurrent}"
    rowSelection="single" id="t2"
    partialTriggers="::t1">
    <af:column sortProperty="RuleName" sortable="true"
    headerText="#{bindings.ItasUiRuleParamsVO2.hints.RuleName.label}"
    id="c11">
    <af:outputText value="#{row.RuleName}"
    id="ot11"/>
    </af:column>
    </af:table>
    I tried this:
    <af:panelList id="pl1">
    <af:forEach items="#{bindings.ItasUiRuleParamsVO2.collectionModel}">
    <af:outputText value="#{item.RuleName}" id="ot14"/>
    </af:forEach>
    </af:panelList>
    but the error say:
    javax.servlet.jsp.JspException: "items" must point to a List or array
         at org.apache.myfaces.trinidadinternal.taglib.ForEachTag.doStartTag(ForEachTag.java:136)
    Any ideas?
    Thanks
    -Mina

    <af:forEach items="#{bindings.ItasUiRuleParamsVO2.collectionModel}" var="row">
    <af:outputText value="#{row.RuleName}" />
    </af:forEach>
    and make sure the table binding is still in place in your pagedef (or binding tab)

  • Delete from two tables in one statement

    Hi,
    Is there a way to delete from two tables in one statement?
    Actually I have two tables:
    1. Base table (id, name, age)
    2. Person table (id, city, street)
    The id in both tables is identical.
    I would like to delete using something like a join:
    Delete from base, person where id=2;
    Thanks
    dyahav

    Hi,
    If you want to delete records from both tables at the same time, your tables must use ON DELETE CASCADE. See the example below.
    CREATE TABLE supplier
    ( supplier_id numeric(10) not null,
      supplier_name varchar2(50) not null,
      contact_name varchar2(50),
      CONSTRAINT supplier_pk PRIMARY KEY (supplier_id)
    );
    CREATE TABLE products
    ( product_id numeric(10) not null,
      supplier_id numeric(10) not null,
      CONSTRAINT fk_supplier
        FOREIGN KEY (supplier_id)
        REFERENCES supplier(supplier_id)
        ON DELETE CASCADE
    );
    In this example, we've created a primary key on the supplier table called supplier_pk. It consists of only one field - the supplier_id field. Then we've created a foreign key called fk_supplier on the products table that references the supplier table based on the supplier_id field.
    Because of the cascade delete, when a record in the supplier table is deleted, all records in the products table will also be deleted that have the same supplier_id value.
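    For illustration, with the foreign key above in place (the supplier_id value is made up):
    DELETE FROM supplier WHERE supplier_id = 100;
    -- all products rows with supplier_id = 100 are removed automatically by the cascade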
    Thank you.

  • Deadlock when updating different rows on a single table with one clustered index

    Deadlock when updating different rows on a single table with one clustered index. Can anyone explain why?
    <event name="xml_deadlock_report" package="sqlserver" timestamp="2014-07-30T06:12:17.839Z">
      <data name="xml_report">
        <value>
          <deadlock>
            <victim-list>
              <victimProcess id="process1209f498" />
            </victim-list>
            <process-list>
              <process id="process1209f498" taskpriority="0" logused="1260" waitresource="KEY: 8:72057654588604416 (8ceb12026762)" waittime="1396" ownerId="1145783115" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.430" XDES="0x3a2daa538" lockMode="X" schedulerid="46" kpid="7868" status="suspended" spid="262" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783115" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
               <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales1', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'ABCD' AND cam_window= 2   </inputbuf>
              </process>
              <process id="process9cba188" taskpriority="0" logused="2084" waitresource="KEY: 8:72057654588604416 (2280b457674a)" waittime="1397" ownerId="1145783104" transactionname="implicit_transaction"
    lasttranstarted="2014-07-30T02:12:16.427" XDES="0x909616d28" lockMode="X" schedulerid="23" kpid="8704" status="suspended" spid="155" sbid="0" ecid="0" priority="0"
    trancount="2" lastbatchstarted="2014-07-30T02:12:16.440" lastbatchcompleted="2014-07-30T02:12:16.437" lastattention="1900-01-01T00:00:00.437" clientapp="Internet Information Services" hostname="CHTWEB-CH2-11P"
    hostpid="12776" loginname="chatuser" isolationlevel="read uncommitted (1)" xactid="1145783104" currentdb="8" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128058">
                <inputbuf>
    UPDATE analyst_monitor SET cam_status = N'4', cam_event_data = N'sales2', cam_event_time = current_timestamp , cam_modified_time = current_timestamp , cam_room = '' WHERE cam_analyst_name=N'12345' AND cam_window= 1   </inputbuf>
              </process>
            </process-list>
            <resource-list>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process9cba188" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process1209f498" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
              <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor" indexname="IX_Clust_scam_an_name_window" id="lock18ee1ab00" mode="X" associatedObjectId="72057654588604416">
                <owner-list>
                  <owner id="process1209f498" mode="X" />
                </owner-list>
                <waiter-list>
                  <waiter id="process9cba188" mode="X" requestType="wait" />
                </waiter-list>
              </keylock>
            </resource-list>
          </deadlock>
        </value>
      </data>
    </event>

    To be honest, I don't think the transaction is necessary, but the developers put it there anyway. The select statement puts the result cam_status into a variable and then, depending on its value, decides whether to execute the second update statement or not. I still can't upload the screenshot, because it says it needs to verify my account first. No clue at all. But it is very simple, just like:
    Clustered Index Update
    [analyst_monitor].[IX_Clust_scam_an_name_window]
    cost: 100%
    By the way, for some reason, I can't find the object based on the associatedObjectId listed in the XML
    <keylock hobtid="72057654588604416" dbid="8" objectname="CHAT.dbo.analyst_monitor"
    indexname="IX_Clust_scam_an_name_window" id="lock4befe1100" mode="X" associatedObjectId="72057654588604416">
    For example: 
    SELECT * FROM sys.partition WHERE hobt_id = 72057654588604416
    This returns nothing. Not sure why.
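    In case it helps, the catalog view is sys.partitions (plural). A sketch of the lookup, run in the database the report points at (dbid = 8), using the hobt_id from the deadlock XML above:
    SELECT p.object_id,
           OBJECT_NAME(p.object_id) AS object_name,
           p.index_id,
           p.hobt_id
      FROM sys.partitions AS p
     WHERE p.hobt_id = 72057654588604416;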

  • How do you delete records from table with data in a select option

    How do you delete records from a table based on data in a select-option? How should the code be written?

    Hi,
    Try
    IF NOT s_select_option[] IS INITIAL.
      DELETE FROM dbtab WHERE field IN s_select_option. " dbtab/field are placeholders for your table and column
    ENDIF.
    COMMIT WORK.
    Be careful though: if the select-option is empty, the WHERE ... IN condition matches every row, so without the initial check you would delete the entire table.
    Regards,
    Arek

  • How can I treat many tables with one handler(?) ?

    Hello~
    I am applying BDB to my embedded system, which is not rich in resources.
    Some *.db files are used frequently, but opening a *.db file [db_create(&dbp, NULL, 0) and dbp->open] takes a long time in BDB.
    So I moved these calls into a boot module so that the open functions are called only once, and all DB handles stay loaded for the entire run time.
    But a DB handle takes about 360 KB, and there are too many *.db files (10), each holding one table.
    How can I handle many tables with one handle?
    Or, if there is a more efficient way to call the open functions just once, please tell me.
    Thank you

    Hello,
    Opening the database handles is expensive due to
    opening a file on disk. Is it possible for the application
    to use in-memory dbs? Otherwise is there a way for the application
    to cache the DB handles and reduce the overhead associated with
    opening and closing them?
    Thank you,
    Sandra

  • How to create editable table with one empty row ?

    I'm looking for a solution for creating an editable table with one empty row using ADF BC. I have seen this in an application that was created with JHeadstart, and it's a very good idea to use it instead of a creation form.

    Hmm, I do it like this:
    drop the VO on the page and select Table -> ADF Table...
    then drop the Create button from the VO -> Operations -> Create (the first one), right-click it (mouse) and choose Edit Binding...
    in Data Collection select the VO, and in Select an Action choose CreateInsert
    good luck

  • Importing a table with a BLOB column is taking too long

    I am importing a user schema from a 9i (9.2.0.6) database to a 10g (10.2.1.0) database. One of the large tables (millions of records) with a BLOB column is taking too long to import (more than 24 hours). I have tried all the tricks I know to speed up the import. Here are some of the settings:
    1 - set buffer to 500 Mb
    2 - pre-created the table and turned off logging
    3 - set indexes=N
    4 - set constraints=N
    5 - I have 10 online redo logs with 200 MB each
    6 - Even turned off logging at the database level with disablelogging = true
    It is still taking too long loading the table with the BLOB column. The BLOB field contains PDF files.
    For your info:
    Computer: Sun v490 with 16 CPUs, solaris 10
    memory: 10 Gigabytes
    SGA: 4 Gigabytes

    Legatti,
    I have feedback=10000. However, by monitoring the import, I know that it is loading an average of 130 records per minute, which is very slow considering that the table contains close to two million records.
    Thanks for your reply.

  • What index is suitable for a table with no unique columns and no primary key

    alpha   beta   gamma   col1   col2   col3
    100     1      -1      a      b      c
    100     1      -2      d      e      f
    101     1      -2      t      t      y
    102     2      1       j      k      l
    Sample data is above, and below are the datatypes for each column:
    alpha datatype - string
    beta datatype - integer
    gamma datatype - integer
    col1, col2, col3 are all string datatypes.
    Note: no single column is unique; we would use alpha, beta, gamma together to uniquely identify a record. As you can see from the sample data, this table currently has no index. I would like an index created covering these columns (alpha, beta, gamma). I believe that creating a clustered index with these as the key columns would be better.
    What would you recommend the index type should be in this case? Say the data volume is 1 million records and we always use the alpha, beta, gamma columns when we filter or query records.
    What index is suitable for a table with no unique columns and no primary key?
    Mudassar
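    A rough sketch of the kind of index under discussion, using the sample columns above and the table name from the reply below; whether it should be clustered (and/or unique) is exactly the open question:
    CREATE CLUSTERED INDEX IX_Test_alpha_beta_gamma
        ON dbo.Test (alpha, beta, gamma);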

    Many thanks for your explanation.
    When I tried the query below on my heap table, SQL Server suggested creating a NONCLUSTERED INDEX including the columns [beta], [gamma], [col1], [col2], [col3]:
    SELECT [alpha]
          ,[beta]
          ,[gamma]
          ,[col1]
          ,[col2]
          ,[col3]
      FROM [TEST].[dbo].[Test]
    where   [alpha]='10100'
    My question is: why did it suggest a nonclustered index instead of a clustered index?
    Mudassar

  • Cartesian of data from two tables with no matching columns

    Hello,
    I was wondering – what’s the best way to create a Cartesian of data from two tables with no matching columns in such a way, so that there will be only a single SQL query generated?
    I am thinking about something like:
    for $COUNTRY in ns0: COUNTRY ()
    for $PROD in ns1:PROD()
    return <Results>
         <COUNTRY> {fn:data($COUNTRY/COUNTRY_NAME)} </COUNTRY>
         <PROD> {fn:data($PROD/PROD_NAME)} </PROD>
    </Results>
    And the expected result is combination of all COUNTRY_NAMEs with all PROD_NAMEs.
    What I’ve noticed when checking query plan is that DSP will execute two queries to have the results – one for COUNTRY_NAME and another one for PROD_NAME. Which in general results in not the best performance ;-)
    What I’ve noticed also is that when I add something like:
    where COUNTRY_NAME != PROD_NAME
    everything is ok and there is only one query created (it's red in the Query plan, but still it's ok from my pov). Still it looks to me more like a workaround, not a real best approach. I may be wrong though...
    So the question is – what’s the suggested approach for such queries?
    Thanks,
    Leszek
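    For comparison, the single pushed-down statement being asked about would look, in plain SQL, something like this (table and column names follow the example above):
    SELECT c.country_name, p.prod_name
      FROM country c
     CROSS JOIN prod p;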

    "Which in general results in not the best performance" - I disagree. Only for two tables with very few rows would a single SQL statement give better performance.
    Suppose there are 10,000 rows in each table - the cross-product will result in 100 million rows. Sounds like a bad idea. For this reason, DSP will not push a cross-product to a database. It will get the rows from each table in separate sql statements (retrieving only 20,000 rows) and then produce the cross-product itself.
    If you want to execute sql with cross-products, you can create a sql-statement based dataservice. I recommend against doing so.

  • How to create table with rows and columns in the layout mode?

    One of my friends advised me to develop my whole site in
    layout mode, as he says it is better than standard mode,
    but I could not make an ordinary table with rows and columns
    in layout mode.
    Is there anyone who can tell me how to?
    Thanks a lot

    Your friend is obviously not a reliable source of HTML
    information.
    Murray --- ICQ 71997575
    Adobe Community Expert

  • Difference between an XMLType table and a table with an XMLType column?

    Hi all,
    Still trying to get my mind around all this XML stuff.
    Can someone concisely explain the difference between:
    create table this_is_xmltype_tab of xmltype;
    and
    create table this_is_tab_w_xmltpe_col(id number, document xmltype);
    What are the relative advantages and disadvantages of each approach? How do they really differ?
    Thanks,
    -Mark
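    A small usage sketch of the two definitions above (the document content is made up), just to show where the XML lives in each case:
    -- XMLType table: each row is itself an XML document
    INSERT INTO this_is_xmltype_tab VALUES (XMLType('<doc>1</doc>'));
    SELECT OBJECT_VALUE FROM this_is_xmltype_tab;
    -- Table with an XMLType column: the document is just one column among others
    INSERT INTO this_is_tab_w_xmltpe_col (id, document) VALUES (1, XMLType('<doc>1</doc>'));
    SELECT t.id, t.document FROM this_is_tab_w_xmltpe_col t;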

    There is another pointer, Mark, that I realized when I was thinking about the differences...
    If you look up "xdb:annotations" in the manual, you will learn about a method that uses an XML Schema to generate your whole design, in terms of physical layout and/or design principles, out of the box. In my mind this should be the preferred solution if you are dealing with very complex XML Schema environments. Taking your XML Schema as your single point of design, which during the actual implementation automatically generates and builds all your needed database objects and their physical requirements, has great advantages for design version management etc., but...
    ...it will automatically create an XMLType table (based on OR, binary XML or "hybrid" storage principles, i.e. the ones that are XML Schema driven) and not, AFAIK, an XMLType column structure: so, as in "our" case, a table with an id column and an XMLType column.
    In principle you could relate to this relationally as:
    "I have created an EER diagram and a physical diagram, and I mix the content/info of those two into one diagram. Then I execute it in the database, and the end result will be a database user/schema that has all the xxxx physical objects I need, the way I want them to be..."
    ...but it will be in the form of an XMLType table structure...
    xdb:annotations can be used to create things like:
    - enforce database/company naming conventions
    - DOM validation enabled or not
    - automatic IOT or BTree index creation (for instance in OR XMLType storage)
    - sort search order enforced or not
    - default tablenames and owners
    - extra column or table property settings like for partitioning XML data
    - database encoding/mapping used for SQL and binary storage
    - avoid automatic creation of Oracle objects (tables/types/etc), for instance, via xdb:defaultTable="" annotations
    - etc...
    See here for more info: http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#ADXDB4519
    and / or for more detailed info:
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030452
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#i1030995
    http://download.oracle.com/docs/cd/E11882_01/appdev.112/e10492/xdb05sto.htm#CHDCEBAG
    ...

  • Tabular Form based on table with lots of columns - how to avoid scrollbar?

    Hi everybody,
    I'm an old Forms and VERY new APEX user. My problem is the following: I have to migrate a form application to APEX.
    The form is based on a table with lots of columns. In Forms you can spread the data over different tab pages.
    How can I realize something similar in APEX? I definitely don't want to use a horizontal scroll bar...
    Thanks in advance
    Hilke

    If the primary key is created by the users themselves (which is not recommended; you should have another ID that the user sees, which would be the varchar2, and keep the primary key as is, since the user really shouldn't ever edit the primary key), then all you need to do is make sure that the table is not populated with a primary key in the wizard, and then make sure that you cannot insert a null into your varchar primary key text field.
    If you're doing it this way, I would make a validation on the page that runs off a SQL Exists validation, something along the lines of:
    SELECT <primary key column>
    FROM <your table>
    WHERE upper(<primary key column>) = upper(<text field containing user input>);
    and if it already exists, fire the validation saying that it already exists and that a new primary key is needed.
    Like I said, you really should have a primary key that the database uses to refer to each individual record, and then an almost pseudo-primary key that the user can use. For example, in the table it would look like this:
    TABLE1
    table_id (this is the primary key which you should NOT change)
    user_table_id (this is the pretend primary key which the user can change)
    other_columns
    etc
    etc
    hope this helps in some way
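    A minimal sketch of that layout (names and sizes are made up):
    CREATE TABLE table1 (
      table_id       NUMBER        PRIMARY KEY,      -- surrogate key, never edited by the user
      user_table_id  VARCHAR2(30)  NOT NULL UNIQUE,  -- user-visible "pseudo" key
      other_columns  VARCHAR2(100)
    );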
