Distributed Cache : Performance issue; takes long to get data

Hi there,
     I have set up a cluster on a single Linux machine with 11 nodes (min & max heap memory = 1GB). The nodes are connected through a multicast address / port number. I have the Distributed Cache service running on all the nodes, and 2 nodes running ExtendTCPService. I loaded a dataset of 13 million entries into the cache (approximately 5GB), where the key is a String and the value is an Integer.
     I run a Java process from another Linux machine on the same network that makes use of this cache. The process fetches around 200,000 items from the cache, and it takes around 180 seconds ONLY to fetch the data from the cache.
     I had a look at Performance Tuning > Coherence Network Tuning and checked the Publisher and Receiver success rates; both were nearly 0.998 on all the nodes.
     It's a bit hard to believe that it takes so long. Maybe I'm missing something. I would appreciate it if you could advise me.
     More info :
          a) All nodes are running on Java 5 update 7
          b) The java process is running on JDK1.4 Update 8
          c) -server option is enabled on all the nodes and the java process
          d) I'm using Tangosol Coherence 3.2.2b371
          e) cache-config.xml
          <?xml version="1.0"?>
          <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
          <cache-config>
            <caching-scheme-mapping>
              <cache-mapping>
                <cache-name>dist-*</cache-name>
                <scheme-name>dist-default</scheme-name>
              </cache-mapping>
            </caching-scheme-mapping>
            <caching-schemes>
              <distributed-scheme>
                <scheme-name>dist-default</scheme-name>
                <backing-map-scheme>
                  <local-scheme/>
                </backing-map-scheme>
                <lease-granularity>member</lease-granularity>
                <autostart>true</autostart>
              </distributed-scheme>
            </caching-schemes>
          </cache-config>
     Thanks,
     Amit Chhajed

Hi Amit,
     Is the java test process single threaded, i.e. did you perform 200,000 consecutive cache.get() operations? If so, this would go a long way toward explaining the results, as most of the time in all processes would be spent waiting on the network, and your results come out to just over 1ms per operation. Please be sure to run with multiple test threads, and also make use of the cache.getAll() call where possible so that a single thread can fetch multiple items in parallel.
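For example, a hypothetical sketch of batching the keys for getAll() (the batch size and key names are made up; cache.getAll() itself is the real Coherence CacheMap call, shown here only in a comment so the sketch stays self-contained):

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedFetch {
    // Split a large key set into fixed-size batches; each batch becomes one
    // cache.getAll(batch) call, i.e. one network round trip for many entries
    // instead of one round trip per key.
    static <K> List<List<K>> toBatches(List<K> keys, int batchSize) {
        List<List<K>> batches = new ArrayList<List<K>>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            batches.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<String>();
        for (int i = 0; i < 200000; i++) {
            keys.add("key-" + i);
        }
        // 200,000 keys in batches of 1,000 -> 200 round trips instead of 200,000.
        List<List<String>> batches = toBatches(keys, 1000);
        System.out.println(batches.size()); // 200
        // In the real client (Coherence CacheMap API):
        // for (List<String> batch : batches) {
        //     results.putAll(cache.getAll(batch)); // one parallel fetch per batch
        // }
    }
}
```

Running several such loops on separate threads keeps the network pipeline full instead of paying one full round-trip latency per key.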
     Also, you may need to do some tuning on your cache server side. In general I would say that on a 1GB heap you should only utilize roughly 750MB of that space for cache storage. Taking backups into consideration, this means 375MB of data per JVM. So with 11 nodes, this would mean a cache capacity of about 4GB. At 5GB of data, each cache server will be running quite low on free memory, resulting in frequent GCs which will hurt performance. Based on my calculations you should use 14 cache servers to hold your 5GB of data. Be sure to run with -verbose:gc to monitor your GC activity.
     You must also watch your machine to make sure that your cache servers aren't getting swapped out. This means that your server machine needs to have enough RAM to keep all the cache servers in memory. Using "top" you will see that a 1GB JVM actually takes about 1.2 GB of RAM. Thus for 14 JVMs you would need ~17GB of RAM. Obviously you need to leave some RAM for the OS, and other standard processes as well, so I would say this box would need around 18GB RAM. You can use "top" and "vmstat" to verify that you are not making active use of swap space. Obviously the easiest thing to do if you don't have enough RAM, would be to split your cache servers out onto two machines.
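The sizing arithmetic above can be sketched as a small calculation (the 750MB usable-heap and 1.2GB process-size figures are rules of thumb from this reply, not fixed limits):

```java
public class CapacityPlan {
    // Cache servers needed: usable heap is halved because each entry also
    // has one backup copy held on another node in the cluster.
    static int jvmsNeeded(double dataSetMb, double usableHeapMb) {
        double primaryDataPerJvmMb = usableHeapMb / 2;
        return (int) Math.ceil(dataSetMb / primaryDataPerJvmMb);
    }

    public static void main(String[] args) {
        int jvms = jvmsNeeded(5 * 1024, 750); // 5GB of data, ~750MB usable per 1GB heap
        double ramGb = jvms * 1.2;            // a 1GB-heap JVM takes ~1.2GB of RAM
        System.out.println(jvms + " cache servers, ~" + Math.round(ramGb) + " GB RAM");
        // prints: 14 cache servers, ~17 GB RAM
    }
}
```

The same helper shows why 11 nodes top out around 4GB: 11 × 375MB ≈ 4GB of primary data.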
     See http://wiki.tangosol.com/display/COH32UG/Evaluating+Performance+and+Scalability for more information on things to consider when performance testing Coherence.
     thanks,
     Mark

Similar Messages

  • Distributed cache performance?

    Hi,
    I have a question about the performance of a cluster using a distributed cache:
    A distributed cache is available in the the cluster, using the expiry-delay functionality. Each node first inserts new entries in the cache and then periodically updates the entries as long as the entry is needed in the cluster (entries that are no longer periodically updated will be removed due to the expiry-delay).
    I performed a small test using a cluster with two nodes that each inserted ~2000 entries in the distributed cache. The nodes then periodically update their entries at 5 minutes intervals (using the Map.put(key, value) method). The nodes never access the same entries, so there will be no synchronization issues.
    The problem is that the CPU load on the machines running the nodes are very high, ~70% (and this is quite powerful machines with 4 CPUs running Linux). To be able to find the reason for the high CPU load, I used a profiler tool on the application running on one of the nodes. It showed that the application spent ~70% of the time in com.tangosol.coherence.component.net.socket.UdpSocket.receive. Is this normal?
    Since each node has a lot of other things to do, it is not acceptable that 70% of the CPU is used only for this purpose. Can this be a cache configuration issue, or do I have to find some other approach to perform this task?
    Regards
    Andreas

    Hi Andreas,
    Can you provide us with some additional information? You can e-mail it to our support account.
    - JProfiler snapshot of the profiling showing high CPU utilization
    - multiple full thread dumps for the process taken a few seconds apart, these should be taken when running outside of the profiler
    - Your override file (tangosol-coherence-override.xml)
    - Your cache configuration file (coherence-cache-config.xml)
    - logs from the high CPU event, please also include -verbose:gc in the logs, directing the output to the coherence log file
    - estimates on the sizes of the objects being updated in the cache
    As this is occurring even when you are not actively adding data to the cache, can you describe what else your application is doing at this time? It would be extremely odd for Coherence to consume any noticeable amount of CPU if you are not making heavy use of the cache.
    Note that when using the Map.put method the old value is returned to the caller, which for a distributed cache means extra network load, you may wish to consider switching to Map.putAll() as this does not need to return the old value, and is more efficient even if you are only operating on a single entry.
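A minimal sketch of the switch (the cache here is just a plain Map standing in for a Coherence NamedCache; putAll() itself is the real call):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class PutAllUpdate {
    // put() must return the previous value, so a distributed cache ships the
    // old value back over the network on every update. putAll() returns
    // nothing, so even a single-entry update avoids that extra transfer.
    static void update(Map<String, Integer> cache, String key, Integer value) {
        cache.putAll(Collections.singletonMap(key, value));
    }

    public static void main(String[] args) {
        Map<String, Integer> cache = new HashMap<String, Integer>();
        update(cache, "entry-1", 42);
        System.out.println(cache.get("entry-1")); // 42
    }
}
```

For the periodic refresh described above, collecting all of a node's entries into one map and issuing a single putAll() per cycle would also cut the number of network operations.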
    thanks,
    Mark

  • Performance issue with FDM when importing data

    In the FDM web console, a performance issue has been detected when importing data (.txt).
    In less than 10 seconds the ".txt" and ".log" files are created in the INBOX folder (the ".txt" file) and in OUTBOX\Logs (the ".log" file).
    At that moment, the system shows the message "Processing, please wait" for 10 minutes. Eventually the information is displayed; however, if we want to see the second page, we have to wait more than 20 seconds.
    It seems to be a performance issue when the system tries to show the imported data in the web page.
    It has also been noted that when a user tries to import a txt file directly by clicking on the tab "Select File From Inbox", the user also has to wait another 10 minutes before the information is displayed on the web page.
    Thx in advance!
    Cheers
    Matteo

    Hi Matteo
    How much data is being imported / displayed when users are interacting with the system?
    There is a report that may help you to analyse this but unfortunately I cannot remember what it is called and don't have access to a system to check. I do remember that it breaks down the import process into stages showing how long it takes to process each mapping step and the overall time.
    I suspect that what you are seeing is normal behaviour but that isn't to say that performance improvements are not possible.
    The copying of files is the first part of the import process before FDM then starts the import so that will be quick. The processing is then the time taken to import the records, process the mapping and write to the tables. If users are clicking 'Select file from Inbox' then they are re-importing so it will take just as long as it would for you to import it, they are not just asking to retrieve previously imported data.
    Hope this helps
    Stuart

  • Takes Long time for Data Loading.

    Hi All,
    Good Morning.. I am new to SDN.
    Currently I am using the datasource 0CRM_SRV_PROCESS_H, which contains 225 fields. I am using around 40 of those fields in my report.
    Can I hide the remaining fields at the datasource level itself (TCODE: RSA6)?
    Currently data loading takes a long time to load the data from PSA to ODS (ODS 1).
    I am also pulling some data from another ODS (ODS 2) (lookup). It takes a long time to update the data in the active data table of the ODS.
    Can you please suggest how to improve the performance of data loading in this case?
    Thanks & Regards,
    Siva.

    Hi,
    Yes, you can hide them; just check the Hide box for those fields. Are you on BI 7.0 or BW 3.x? Also, is the number of records huge?
    If so, you can split the records and execute; I mean, use the same InfoPackage and just execute it with different selections.
    Check in ST04 whether there are any locks or lock waits. If so, go to SM37 and check whether any long-running job is there and whether it is progressing: double-click on the job, copy the PID from the job details, go to ST04, expand the node, and check whether you can find that PID there.
    Also check the system log in SM21, and short dumps in ST22.
    To improve performance, you can try to increase the virtual memory or the number of servers if possible; this will increase the number of work processes, since if many jobs run at the same time there may be no free work processes left to proceed.
    Regards,
    Debjani

  • ESB performance issue: takes too long to select and insert records in DBs

    Hi,
    I have an ESB service which has to select data from seven different tables(using join operations) of one database and insert it into a single table of another database.
    It takes an unduly long time to do this operation.
    For ex: it takes over 2 hours to select and insert 3000 records.
    When I ran the same query to select the records from the tables using SQL Developer, it took only 23 seconds.
    Do I need to change any configuration settings in Enterprise Manager or some other place to increase the performance? Someone please advise.
    I am using Oracle SOA Suite 10.1.3.4
    Thanks,
    RV


  • Cache performance issues

    I was able to add indexes to a cache. This cache holds objects which contain all the rows in one of our database tables.
    I run a sql query using hibernate on the database table, and then run the same query using filters on the cache. I am not noticing a significant performance gain. Is there something I'm doing wrong?
    Here's what I'm trying to do:
    QueryMap cache = (QueryMap) this.getPrimaryDAO().getCache();
    Filter filterStateEq    = new EqualsFilter("getStateCode", state);
    Filter filterCompanyEq  = new EqualsFilter("getCompanyCode", company);
    Filter filterCoverageEq = new EqualsFilter("getCovCode", coverage);
    Filter filterEffDateLE  = new DateLessEqualsFilter("getEffectiveDate", effectiveDate);
    Filter filterExpDateGE  = new DateGreaterEqualsFilter("getExpirationDate", effectiveDate);
    Filter filterAnd = new AllFilter(new Filter[] {
        filterStateEq, filterCompanyEq, filterCoverageEq, filterEffDateLE, filterExpDateGE });
    Set filteredSet = cache.keySet(filterAnd);
    Basically I'm trying to simulate a SQL query:
    select * from ... where state = ... and company = ... and covcode = ... and ...

    Hi Asim,
    The code looks good, and it is quite natural to expect performance numbers similar to the DB considering you are executing the query on a single thread. Where you will see the difference is under load on multiple machines. Let's say you have 10 nodes configured as cache servers (storage enabled). Each of these nodes will own approximately 10% of the total data in the cache. When a client thread calls a distributed query, it will be executed on each cache server node in parallel against the partial dataset it owns. This means that with Coherence the performance of your queries will scale near linearly. Providing similar scalability with a typical commercial DB is going to be incomparably more expensive.
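The partition-parallel execution described above can be illustrated with a toy model (a plain-Java stand-in, not the Coherence API: each "cache server" scans only the slice of data it owns, in parallel, and the client unions the partial key sets):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PartitionedQuery {
    // Each partition is filtered concurrently, mimicking how a distributed
    // query runs against each storage node's ~1/N share of the data.
    static Set<String> keysMatching(List<Map<String, String>> partitions,
                                    final String wanted) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(partitions.size());
        List<Future<Set<String>>> partials = new ArrayList<Future<Set<String>>>();
        for (final Map<String, String> partition : partitions) {
            partials.add(pool.submit(new Callable<Set<String>>() {
                public Set<String> call() {
                    Set<String> hits = new HashSet<String>();
                    for (Map.Entry<String, String> e : partition.entrySet()) {
                        if (e.getValue().equals(wanted)) {
                            hits.add(e.getKey());
                        }
                    }
                    return hits;
                }
            }));
        }
        Set<String> result = new HashSet<String>();
        for (Future<Set<String>> partial : partials) {
            result.addAll(partial.get()); // union the per-node results
        }
        pool.shutdown();
        return result;
    }
}
```

Doubling the number of partitions (nodes) halves the amount each worker scans, which is the near-linear scaling referred to above.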
    Best regards,
    Gary Hawks
    Tangosol

  • Dblink take too long to get data

    Hi,
    I have a 9.2.0.7.0 DB on Windows 2000.
    Here is the problem:
    When I test the DB links on this DB:
    select * from dual@DB_LINK_NAME;
    this statement takes the whole day!
    I tried another DB link with the same result.
    Observations:
    1 - no errors in the log file
    2 - I don't know when this problem started
    3 - CPU utilization is not at 100%
    4 - no tablespace is full or more than 80% full
    5 - the SGA is:
    Total System Global Area 135339876 bytes
    Fixed Size 454500 bytes
    Variable Size 109051904 bytes
    Database Buffers 25165824 bytes
    Redo Buffers 667648 bytes
    Can anyone help, or are there any Oracle notes that may help?

    Database links between 9.2 and 8.0.5 are unsupported, and 8.0.5 is unsupported. I would be very concerned about a critical billing application running on an Oracle release that was desupported 8 years ago and isn't even a terminal release that Oracle would have offered extended support on and is 9 major releases behind the current release.
    I suppose it's possible that the problem you're seeing is related solely to the fact that you have fundamentally incompatible releases trying to communicate, but normally that sort of thing generates different errors. So I would tend to put my money on Niall's suggestion that there is a firewall between the systems that is interfering with the communication (there are a number of ways this could be happening that would not interfere with a simple ping, so that doesn't tell us much).
    If the release compatibility is the problem, is there a third database somewhere running a release that was certified to connect to both 8.0.5 and 9.2 that you could use as a bridge? For example, if you have an 8.1.7 database somewhere, a database link from 9.2 to 8.1.7 used to be supported and a database link from 8.1.7 to 8.0.5 used to be supported. Of course, then you're making two hops across the network each time, which slows things down, but that would at least eliminate the client/server incompatibility issues.
    Justin

  • Help WF: mail message takes long to reach user's inbox

    Friends,
    I have a task that sends an e-mail message. My workflow is all right, and the message reaches the Business Workplace quickly, but the user says this mail message never reached his mailbox.
    I've already checked transaction SWEL, and everything looks fine there!
    thanks,
    Glauco

    Hi.
    Do you mean SAP mail or internet mail?
    If internet mail, maybe you need to configure the job that sends these emails to run more often; it is set up in transaction SCOT.
    If you mean SAP mail, maybe you can try sending the mails with the "urgent" option.
    thanks and regards

  • CDP Performance Issue-- Taking more time fetch data

    Hi,
    I'm working on Stellent 7.5.1.
    For one of the portlets in the portal, it is taking a long time to fetch data. Can someone please help me solve this issue so that performance can be improved? This is my code for fetching data from the server:
    public void getManager(final HashMap binderMap)
        throws VistaInvalidInputException, VistaDataNotFoundException,
               DataException, ServiceException, VistaTemplateException {
        String collectionID = getStringLocal(VistaFolderConstants.FOLDER_ID_KEY);
        long firstStartTime = System.currentTimeMillis();
        HashMap resultSetMap = null;
        String isNonRecursive = getStringLocal(VistaFolderConstants.ISNONRECURSIVE_KEY);
        if (isNonRecursive != null
                && isNonRecursive.equalsIgnoreCase(
                        VistaContentFetchHelperConstants.STRING_TRUE)) {
            VistaLibraryContentFetchManager libraryContentFetchManager =
                    new VistaLibraryContentFetchManager(binderMap);
            SystemUtils.trace(VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            resultSetMap = libraryContentFetchManager.getFolderContentItems(m_workspace);
            // used to add the resultset to the binder.
            addResultSetToBinder(resultSetMap, true);
        } else {
            long startTime = System.currentTimeMillis();
            // isStandard is used to decide whether the call is for Standard or Extended.
            SystemUtils.trace(VistaContentFetchHelperConstants.CONTENT_FETCH_TRACE,
                    "The input Parameters for Content Fetch = " + binderMap);
            String isStandard = getTemplateInformation(binderMap);
            long endTimeTemplate = System.currentTimeMillis();
            binderMap.put(VistaFolderConstants.IS_STANDARD, isStandard);
            long endTimebinderMap = System.currentTimeMillis();
            VistaContentFetchManager contentFetchManager =
                    new VistaContentFetchManager(binderMap);
            long endTimeFetchManager = System.currentTimeMillis();
            resultSetMap = contentFetchManager.getAllFolderContentItems(m_workspace);
            long endTimeresultSetMap = System.currentTimeMillis();
            // used to add the resultset and the total no of content items to the binder.
            addResultSetToBinder(resultSetMap, false);
            long endTime = System.currentTimeMillis();
            if (perfLogEnable.equalsIgnoreCase("true")) {
                Log.info("Time taken to execute "
                        + "getTemplateInformation=" + (endTimeTemplate - startTime)
                        + "ms binderMap=" + (endTimebinderMap - startTime)
                        + "ms contentFetchManager=" + (endTimeFetchManager - startTime)
                        + "ms resultSetMap=" + (endTimeresultSetMap - startTime)
                        + "ms getManager:getAllFolderContentItems = " + (endTime - startTime)
                        + "ms overallTime=" + (endTime - firstStartTime)
                        + "ms folderID =" + collectionID);
            }
        }
    }
    Edited by: 838623 on Feb 22, 2011 1:43 AM

    Hi.
    The SELECT statement accessing the MSEG table is often slow.
    To improve the performance on MSEG:
    1. Check for the proper notes in the Service Marketplace if you are working on a CIN version.
    2. Index the MSEG table.
    3. Check and limit the columns in the SELECT statement.
    Possible Way.
    SELECT MBLNR MJAHR ZEILE BWART MATNR WERKS LIFNR MENGE MEINS
    EBELN EBELP LGORT SMBLN BUKRS GSBER INSMK XAUTO
    FROM MSEG
    INTO CORRESPONDING FIELDS OF TABLE ITAB
    WHERE WERKS EQ P_WERKS AND
    MBLNR IN S_MBLNR AND
    BWART EQ '105' .
    Delete itab where itab EQ '5002361303'.
    Delete itab where itab EQ '5003501080'.
    Delete itab where itab EQ '5002996300'.
    Delete itab where itab EQ '5002996407'.
    Delete itab where itab EQ '5003587026'.
    Delete itab where itab EQ '5003493186'.
    Delete itab where itab EQ '5002720583'.
    Delete itab where itab EQ '5002928122'.
    Delete itab where itab EQ '5002628263'.
    Regards
    Bala.M
    Edited by: Bala Malvatu on Feb 7, 2008 9:18 PM

  • Report Performance Issue and Strange Execution Log Data

    Today we have had a report suddenly start taking a long time to execute.
    Looking at the Report Server executionLog3 table/view we have the following information for the query in question. 
     <Connection>
          <ConnectionOpenTime>1</ConnectionOpenTime>
          <DataSets>
            <DataSet>
              <Name>ReportDataset</Name>
              <RowsRead>7</RowsRead>
              <TotalTimeDataRetrieval>150013</TotalTimeDataRetrieval>
              <ExecuteReaderTime>3</ExecuteReaderTime>
            </DataSet>
          </DataSets>
        </Connection>
    Supposedly the time taken to retrieve the data is around 150 seconds.  However, running a profiler trace while running the report in SSRS shows the query executing in under 1 second.  
    Indeed, running a profiler trace for anything on the server with a duration greater than 60 seconds isn't returning anything. I can only assume the above data is wrong when it says 150 seconds to retrieve the data. It IS taking that long to run the report though - so the question is - where is the time going?
    Why can't I find a slow query on the server but SSRS thinks there is? 
    LucasF
    EDIT: This was fixed by restarting the report server.  Any ideas on why this might occur? 

    Hi Lucas,
    According to your description, you find the <TotalTimeDataRetrieval> in ExecutionLog3 is larger than the profiler trace time.
    In Reporting Services, to analyze the performance of the report, we usually check the TimeDataRetrieval to find the time we spend on retrieving the data. It’s the time needed for SQL Server to retrieve the data of all datasets in your report. So in your
    scenario, please check if the TimeDataRetrieval is equal to the time in profiler trace.
    Reference:
    More tips to improve performance of SSRS reports
    If you have any question, please feel free to ask.
    Best regards,
    Qiuyun Yu
    Qiuyun Yu
    TechNet Community Support

  • Performance Issue in Large volume of data in report

    Hi,
    I have a report that will process large amount of data, but it takes too long to process the data into final ALV table, currently im using this logic.
    Select ....
    Select for all entries...
    Loop at table into workarea...
    read table2 where key = workarea-key binary search.
    modify table.
    read table2 where key = workarea-key binary search.
    modify table.
    endloop.
    Currently i select all data that i need (only fields necessary) create a big loop and read other table to insert it to the fields in the final table
    Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM

    Hi ,
    You can use field symbols instead of work areas.
    If you use field symbols, there is no need for a MODIFY statement.
    Here are two equivalent pieces of code:
    1) using work areas:
    types: begin of lty_example,
             col1 type char1,
             col2 type char1,
             col3 type char1,
           end of lty_example.
    data: lt_example type standard table of lty_example,
          lwa_example type lty_example.
    field-symbols: <lfs_example> type lty_example.
    suppose if you have the following information in your internal table
    col1 col2 col3
    1      1    1
    1      2    2
    2      3    4
    Now you may use the modify statement using work areas
    loop at lt_example into lwa_example.
    lwa_example-col2 = '9'.
    modify lt_example index sy-tabix from lwa_example transporting col2.
    endloop.
    or better, using field symbols:
    loop at lt_example assigning <lfs_example>.
    <lfs_example>-col2 = '9'.
    * here there is no need of a modify statement.
    endloop.
    The code using field symbols is about 10 times faster than using work areas and a MODIFY statement.

  • Performance issue on Fiscal period.

    HI all,
    I have a MultiProvider built on an InfoSet. The InfoSet is built on 3 standard ODS objects (0FIGL_O02, 0PUR_O01, 0PUR_DS03). The user runs the report by Company Code and Fiscal Period.
    Company Code and Fiscal Period are available only in the FI-GL ODS. The purchasing ODS objects have only the Fiscal Variant time characteristic. When I try to run the report, it takes an unusually long time.
    How should I resolve this performance issue?
    1) Will getting Fiscal Period into the purchasing ODS help improve the performance? If so, can anyone please give me a step-by-step process? This is very urgent.
    2) Or should I take some other approach to improve the performance? The FI-GL ODS already has secondary indexes on it.
    Please advise.
    Message was edited by:
            sap novice

    Duplicate post:
    Performance issue on AFPO

  • SQL Services 2012 Reporting Services Performance Issue - PowerView

    Power View reports are loading very slowly when opened in SharePoint 2013; it is taking more than 15 seconds. It is a development environment with at most 10 users and no traffic at all, but I'm still not sure why it is taking such a long time.
    We have 2 servers in the SharePoint farm: one for SharePoint and the other for SQL. I have gone through the logs in the reporting database and attached them below. Can you please tell me what we can conclude from the attached sheet, whether it is slow or fast, or where we are having the issue?
    SQL server version is SQL 2012 SP2.
    SharePoint 2013 is RTM.
    Gone through the below blogs but no luck.
    http://blogs.msdn.com/b/psssql/archive/2013/07/29/tracking-down-power-view-performance-problems.aspx
    https://social.msdn.microsoft.com/Forums/sqlserver/en-US/4ed01ff4-139a-4eb3-9e2e-df12a9c316ff/ssrs-2008-r2-and-sharepoint-2010-performance-problems
    Thanks.
    Thanks, Ram Ch

    Hi Ram Ch,
    According to your description, you have a performance issue when running your Power View report. Right?
    In this scenario, based on your screenshot, it takes a long time on data retrieval. How is the performance when executing the query in SQL Server Management Studio? Since you mention there's no traffic at all and 15 seconds will not cause a query timeout, we suggest you optimize the query for retrieving data. Please refer to the links below:
    Troubleshooting Reports: Report Performance
    Please share some detail information about the data query if possible. Thanks.
    Best Regards,
    Simon Hou

  • C30/C300 performance issues

    Hello,
    I've got 2 C30s + 1 C300 on an ISP network, and these are being used for both incoming and outgoing mail.
    Recently, we started having performance issues where the work queue was paused several times daily (reason: paused on antivirus, antispam, etc.). This eventually causes the work queue to back up to around 10k-20k messages, and the units don't process mail rapidly.
    I also noted some viruses (e.g. MyTob) being detected and was wondering whether the IronPort/Sophos engine is not able to scan the messages properly, resulting in this huge performance issue.
    We also get lots of Sophos timeouts daily; the timeout is set to 120 seconds.
    RAM usage goes up to 60%, even though traffic is not that heavy.
    Has anyone experienced a similar problem?
    Thanks,
    Vinesh

    Hi,
    Here's a sample of the mail logs.
    I did increase/decrease the antivirus timeouts, but no changes.
    It seems that it has difficulty scanning the files.
    Thu Nov 29 15:43:39 2007 Info: Start MID 233665168 ICID 702663276
    Thu Nov 29 15:43:39 2007 Info: MID 233665168 ICID 702663276 From:
    Thu Nov 29 15:43:39 2007 Info: MID 233665168 ICID 702663276 RID 0 To:
    Thu Nov 29 15:43:47 2007 Info: MID 233665168 Message-ID '<6d9jas>'
    Thu Nov 29 15:43:47 2007 Info: MID 233665168 Subject 'Error'
    Thu Nov 29 15:43:47 2007 Info: MID 233665168 ready 64728 bytes from
    Thu Nov 29 15:44:49 2007 Warning: MID 233665168: scanning error (name=u'doc.scr', type=executable/exe): viewer bailed out
    Thu Nov 29 15:44:49 2007 Info: MID 233665168 matched all recipients for per-recipient policy DEFAULT in the outbound table
    Thu Nov 29 15:45:03 2007 Info: MID 233665168 interim AV verdict using Sophos VIRAL
    Thu Nov 29 15:45:03 2007 Info: MID 233665168 antivirus positive 'W32/Mytob-C'
    Thu Nov 29 15:45:03 2007 Info: Message aborted MID 233665168 Dropped by antivirus
    Thu Nov 29 15:45:03 2007 Info: Message finished MID 233665168 done

  • Iphone 4s 5.0.1 SMS takes longer to receive

    Hello,
    I'm using an iPhone 4S with iOS 5.0.1, and it's my first iPhone, so I don't know if this is normal or not. Let me explain:
    Sometimes it takes hours to receive an SMS, or several; I don't know if it's a bug or what. Normally it takes longer to get a normal SMS text than other devices; I've compared with Nokias etc. and it takes 1 or 2 minutes longer. That's strange for a top-notch device like this and such a simple task.
    It's very annoying. Apart from the other well-known bugs in this iOS version, this one is the strangest. Also, the network operator signal peaks are irregular: not a good signal in some cases, and phone calls drop and don't sound as clear as on my old phones.
    Please correct this; it's kind of lame for a phone to have these errors. If I wanted an iPod I would have gotten one.
    Not looking good, Apple. Stop releasing noob updates and jailbreak failsafes and concentrate on real bugs; it's ridiculous, like I said. I have all the other reported 5.0.1 bugs: 2 rings on SMS, then a second 2 rings 2 or 3 minutes after the same SMS (just the ring, the SMS doesn't repeat), and other stupid bugs.
    I thought Apple was not noobish like Android; that's why I picked an Apple device.
    Thanks for reading

    I can do better than that: Vodafone Portugal has software called Vodafone WebPhone, where you have a virtual phone connected online. You can receive the SMS there; normally that software gets the message 5 seconds before Nokias, for example, but with the iPhone you can't imagine the difference: around 1 minute, never less than that.
    Yesterday it was hours; that is not normal. A phone can't have these problems; messages must be instant SMS, and so on. I bought a top-notch telephone, not just a portable internet device.
    Thanks for searching on my behalf. I also searched and got nothing, but maybe it's normal for iPhones, I don't know.
