Other or cache

In "Other," I guess? If you do a search on a keyword on the iPhone, it displays hundreds of deleted emails and text messages. How do you clear this cache? iPhone support said I would have to plug into a computer and let a tech do it, which I felt was a privacy issue. Is there any way to clean this up?

If you are using the iPhone as it is designed, you should lose nothing.
You should be syncing contacts/calendars/notes with your computer regularly.
You should be importing all pics taken with the iPhone to your computer regularly.
You should be transferring any iTunes purchases made on your iPhone to your computer regularly.
Everything should be on your computer.

Similar Messages

  • Update to near cache value is not reflected in the other near cache

    Hi,
    I have a cluster of two WebLogic servers running the Tangosol cache within them. I also have two separate Tangosol servers running on the same two machines.
    When I make an update to the Tangosol cache in one of the WebLogic applications, unfortunately the update is not picked up by the other WebLogic application's Tangosol cache.
    What do I need to do so that all updates are available in all the Tangosol caches?
    example below:
    initial:
    server1: weblogic tangosol near cache, key = "One", Value = null
    server2: weblogic tangosol near cache, key = "One", Value = null
    after the update on server1:
    server1: weblogic tangosol near cache, key = "One", Value = "New York"
    server2: weblogic tangosol near cache, key = "One", Value = null <- still null (?)
    Why is the value on server2 not updated also?
    Thanks.
    The Tangosol configuration is a near distributed cache. The XML is below. Thanks.
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <!-- Caches with any name will be created using the default near scheme. -->
        <cache-mapping>
          <cache-name>*</cache-name>
          <scheme-name>default-near</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <!--
        Default simple near caching scheme with a default-eviction local cache
        in the front tier and a non-expiring distributed cache in the back tier.
        -->
        <near-scheme>
          <scheme-name>default-near</scheme-name>
          <front-scheme>
            <local-scheme>
              <scheme-ref>default-eviction</scheme-ref>
            </local-scheme>
          </front-scheme>
          <back-scheme>
            <distributed-scheme>
              <scheme-ref>default-distributed</scheme-ref>
            </distributed-scheme>
          </back-scheme>
          <!--
          invalidation: auto    = notified of all changes in the back cache
                        present = subscribes only to changes to entries held in the near cache
          -->
          <invalidation-strategy>auto</invalidation-strategy>
          <!-- the cache server does not automatically start this scheme -->
          <autostart>false</autostart>
        </near-scheme>
        <!-- Default distributed caching scheme. -->
        <distributed-scheme>
          <scheme-name>default-distributed</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
            <class-scheme>
              <scheme-ref>default-backing-map</scheme-ref>
            </class-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
        </distributed-scheme>
        <!--
        Default backing map scheme definition used by all caches that do
        not require any eviction policies.
        -->
        <class-scheme>
          <scheme-name>default-backing-map</scheme-name>
          <class-name>com.tangosol.util.SafeHashMap</class-name>
          <init-params></init-params>
        </class-scheme>
        <!-- Default eviction policy scheme. -->
        <local-scheme>
          <scheme-name>default-eviction</scheme-name>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>100000</high-units>
          <low-units>0</low-units>
          <expiry-delay>0</expiry-delay>
          <flush-delay>0</flush-delay>
          <cachestore-scheme></cachestore-scheme>
        </local-scheme>
      </caching-schemes>
    </cache-config>

    We can certainly set up a call to discuss what you are seeing. Please email [email protected] with your phone number and other information to set up the call.
    The more information that you can provide in advance (on the forum or by email), the better. For example, send your configuration settings (cache configuration, cluster configuration) files.
    Peace.

  • How to clear the OTHER memory cache

    Hi there fire fox help,
    I have read the page on 'How to clear the cache' but this does not seem to clear everything.
    When I am editing a website on my computer and viewing the changes through Firefox, everything works great; I just hit refresh.
    But when testing the website's changes online (after uploading it), I cannot see any changes. Using the refresh button is of no help, and clearing the cache is not helping either.
    I get quite frustrated with this, as I then have to change the name of the web page just to get Firefox to refresh it properly, and then change it back again.
    Thanks for your time, Max

    I've tried all these and Firefox still won't load the current image. I know it's there because Internet Explorer does load it. Is there anything else I can try or look at?
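    A common workaround for this (a general technique, not something suggested in this thread) is cache-busting: append a version query string to the page's asset URLs so the browser treats the changed file as a brand-new resource and fetches it again. A hypothetical sketch, with placeholder file names:

    ```html
    <!-- bump the v= value whenever the file changes -->
    <link rel="stylesheet" href="styles.css?v=2">
    <script src="app.js?v=2"></script>
    <img src="banner.jpg?v=2" alt="banner">
    ```

    A hard refresh (Ctrl+F5 on Windows, Cmd+Shift+R on Mac) is the quicker manual equivalent for a single page.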

  • OT: Netflix App or Other Rental Cache App

    Curious if the Netflix app can cache content on the K1 or not? If not, does the Android have an app that can do this? I travel and would like to preload movies to watch before leaving for a trip. TIA

    dyates9770,
    Will ask about that.  I did get an answer on the cache... sounds like no go.   Sorry.   
    Mark
    ThinkPads: S30, T43, X60t, X1, W700ds, IdeaPad Y710, IdeaCentre: A300, IdeaPad K1
    Mark Hopkins
    Program Manager, Lenovo Social Media (Services)
    twitter @lenovoforums

  • BI Web Template Caching on EP server

    Hello,
    We are developing BI web templates in the BI Web Application Designer and then previewing (via publishing) them with the EP web server. When we make a change to the report/template and republish/preview it to the EP server, the old version remains in the EP web server cache.
    Also to note, this is PRIOR to creating BI iViews and incorporating them in EP, so our assumption is that the cache settings in EP iViews do not apply in this case, but that it is a broader issue with the EP web server or application itself.
    So far, we have tried clearing the overall PCD EP cache under the sys admin support section, and that does not work. The only way to get the latest content to show up via preview/republish is to restart the EP instance, which clears the cache.
    Is there any other place the cache can be cleared, or a setting that will allow the BI previewer and ALL content (i.e. iViews as well) to ALWAYS get the latest and greatest version?
    Thanks in advance for any help.
    Dave K.

    hi Dave,
    A couple of questions: are you on a federated network, or are you using any proxy to access the portal? Updates will sometimes be slightly delayed based on the proxy cache. How are you adding the reports to the role, through delta link or copy?
    When using the portal cache, data updates can appear with a delay.
    Please go through the following link; it may help you.
    http://help.sap.com/saphelp_nw70/helpdata/EN/25/8c174082fe1961e10000000a155106/content.htm
    Jo

  • Unable to run project on upgrading to azure sdk 2.3 while using cache

    We upgraded the Azure SDK from 1.8 to 2.3 and we are unable to run our project.
    The following is the error:
    Attempt by method 'Microsoft.Web.DistributedCache.DataCacheFactoryWrapper.CreateDataCacheFactoryConfiguration(System.String)' to access method 'Microsoft.ApplicationServer.Caching.DataCacheFactoryConfiguration..ctor(System.String)' failed.
    However, if we comment out the following lines in web.config, we are able to run the project.
    <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
      <providers>
        <add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="AppName" dataCacheClientName="default" applicationName="AFCacheSessionState"/>
      </providers>
    </sessionState>
    Below is the screenshot of the error:

    Hi Jambor,
    Thanks for the reply.
    We have no other sessionState caching configured other than this one in the WebRole.
    So, in the WebRole web.config, we have:
    <section name="dataCacheClients" type="Microsoft.ApplicationServer.Caching.DataCacheClientsSection, Microsoft.ApplicationServer.Caching.Core" allowLocation="true" allowDefinition="Everywhere" />
    <section name="cacheDiagnostics" type="Microsoft.ApplicationServer.Caching.AzureCommon.DiagnosticsConfigurationSection, Microsoft.ApplicationServer.Caching.AzureCommon" allowLocation="true" allowDefinition="Everywhere" />
    <sessionState mode="Custom" customProvider="AFCacheSessionStateProvider">
      <providers>
        <add name="AFCacheSessionStateProvider" type="Microsoft.Web.DistributedCache.DistributedCacheSessionStateStoreProvider, Microsoft.Web.DistributedCache" cacheName="AppName" dataCacheClientName="default" applicationName="AFCacheSessionState"/>
      </providers>
    </sessionState>
    <dataCacheClients>
      <dataCacheClient name="default">
        <!-- To use the in-role flavor of Windows Azure Cache, set identifier to the cache cluster role name -->
        <!-- To use the Windows Azure Cache Service, set identifier to the endpoint of the cache cluster -->
        <autoDiscover isEnabled="true" identifier="appname.cache.windows.net" />
        <!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
        <!-- Use this section to specify security settings for connecting to your cache. This section is not required if your cache is hosted on a role that is part of your cloud service. -->
        <securityProperties mode="Message" sslEnabled="true">
          <messageSecurity authorizationInfo="{authInfo}" />
        </securityProperties>
      </dataCacheClient>
    </dataCacheClients>
    <cacheDiagnostics>
      <crashDump dumpLevel="Off" dumpStorageQuotaInMB="100" />
    </cacheDiagnostics>
    And here are the installers that are installed in our win7 system:
    Windows Azure Authoring Tools - v2.3
    Windows Azure Compute Emulator - v2.3
    Windows Azure Libraries for .NET - v2.3
    Windows Azure Storage Emulator - v2.3
    Windows Azure Storage Tools - v2.2.2
    Windows Azure Tools for Microsoft LightSwitch for Visual Studio 2012 - October 2012
    Windows Azure Tools for Microsoft Visual Studio 2012 - v2.3
    EDIT: Tried with
    Windows Azure Tools for Microsoft LightSwitch for Visual Studio 2013 - v2.3
    But no luck.
    Please let us know if we are missing something.
    Thanks,
    - Sovan
    sovan kumar das
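    One thing worth checking (an assumption; the thread never confirms the root cause) is that the in-role cache assemblies were upgraded together with the SDK, e.g. by reinstalling the Windows Azure Cache NuGet package targeted at SDK 2.3, and that web.config carries no stale binding redirects for the caching assemblies left over from 1.8. A hypothetical redirect fragment; the version numbers are placeholders and must be checked against the assemblies actually deployed:

    ```xml
    <!-- hypothetical: redirect old caching-assembly references to the version shipped with SDK 2.3 -->
    <dependentAssembly>
      <assemblyIdentity name="Microsoft.ApplicationServer.Caching.Core"
                        publicKeyToken="31bf3856ad364e35" culture="neutral" />
      <bindingRedirect oldVersion="0.0.0.0-2.3.0.0" newVersion="2.3.0.0" />
    </dependentAssembly>
    ```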

  • DNS cache " Name Does not Exist"

    Hey Guys,
    So we've been experiencing a really weird issue related to the DNS for past couple of months. Here are the details:
    1) Our domain machines are Windows 7 Enterprise and their DNS points to Windows DNS Servers
    2) For companyxyz.net internal sites, the Windows DNS resolves those from its
    companyxyz.net zone.
    3) For public *.companyxyz.com records, the Windows DNS has conditional forwarders pointing these requests to our Linux BIND servers, and then the authoritative name servers respond to these queries accordingly.
    4) Our internal employees use the public records such as testing.companyxyz.com 
    Problems:
    1) Employees on the internal network would randomly experience "page not found" in their browsers while trying to hit testing.companyxyz.com. When we try to ping this URL, the ping fails too. However, NSLOOKUP works perfectly fine and returns the correct results. ipconfig /flushdns fixes the issue right away.
    2) During the time when this problem is occurring, if I look into the local cache (ipconfig /displaydns), I find an entry saying:
        testing.companyxyz.com
        Name does not exist. 
    ipconfig /flushdns obviously clears out this record along with the other locally cached records and fixes the issue.
    3) Pointing the local computers directly to the Linux BIND servers as their DNS never creates this issue. It only happens when they are pointing to the Windows DNS and going to this public record. The problem also seems to occur a lot more frequently when there is a considerably high number of hits to this URL.
    Have you guys experienced this issue before? I am looking for a fix, rather than having the end users flush their DNS constantly. Also note this problem occurs sometimes once a day, or 2-3 times a week. It's very random.
    Thanks.
    Bilal
     

    Hi,
    It seems that the issue is related to your Windows 7 client. Consider whether there is a DNS attack or a virus on the computer.
    Please try to do a safety scan first.
    Please monitor the DNS server performance by referring to these articles:
    Monitoring DNS server performance
    http://technet.microsoft.com/en-us/library/cc778608(WS.10).aspx
    Monitoring and Troubleshooting DNS
    http://www.tech-faq.com/monitoring-and-troubleshooting-dns.html
    For the next step, we need to capture the traffic using Network Monitor when the issue happens, while continuously pinging testing.companyxyz.com.
    Microsoft Network Monitor 3.4
    http://www.microsoft.com/en-us/download/details.aspx?id=4865
    Let's see whether the DNS request happens and how it is handled.
    You can post back the saved traffic log here for our further research.
    Kate Li
    TechNet Community Support
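    The cached "Name does not exist" entry points at Windows negative DNS caching: a failed lookup is cached on the client until it expires or is flushed. As a hedged client-side workaround (not suggested in this thread, and it masks rather than fixes the server-side cause, so test before rolling it out), the documented MaxNegativeCacheTtl registry value can be set to 0 so negative answers are not cached:

    ```
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dnscache\Parameters" /v MaxNegativeCacheTtl /t REG_DWORD /d 0 /f
    ipconfig /flushdns
    ```

    A reboot, or a restart of the DNS Client (Dnscache) service, is needed for the registry change to take effect.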

  • Wccp web-cache -- can't get it working

    I installed a Squid-based caching appliance by Stratacache. It supports GRE WCCP redirect in transparent mode; I have it configured as WCCPv2 using the router's LAN IP address, 10.250.1.2.
    Every time I turn on caching for a host (or the entire LAN), the internet breaks for whomever I turn WCCP on for. I have tried disabling CEF and have moved the cache to its own router interface.
    Topology of the Cisco 2801-SEC-K9 router, running 12.4(22)T advsecurity
    FastE 0/0 (10.250.1.1) ---> connected directly to cache server
    FastE0/1 (10.23.1.1) ---> Connected to internal LAN
    MultiLink1 (12.x.x.98)  ---> 4 T1 multilink to AT&T Internet Service
    so here is my config,
    ip wccp web-cache redirect-list 46 group-list 40 password webcache
    ip wccp version 2
    access-list 40 permit 10.250.1.2 (cache server)
    access-list 46 permit 10.23.1.21 (test host for wccp)
    interface fastethernet0/1
    ip wccp web-cache redirect in
    here is the output from the router
    Roosevelt-2801(config)#do sh ip wccp web-cache view
        WCCP Routers Informed of:
            12.x.x.98
        WCCP Clients Visible:
            10.250.1.2
        WCCP Clients NOT Visible:
            -none-
    Roosevelt-2801(config)#do sh ip wccp web-cache det
    WCCP Client information:
            WCCP Client ID:          10.250.1.2
            Protocol Version:        2.0
            State:                   Usable
            Redirection:             GRE
            Packet Return:           GRE
            Assignment:              HASH
            Initial Hash Info:       00000000000000000000000000000000
                                     00000000000000000000000000000000
            Assigned Hash Info:      FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
                                     FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
            Hash Allotment:          256 (100.00%)
            Packets s/w Redirected:  914
            Connect Time:            1d18h
            Bypassed Packets
              Process:               0
              CEF:                   0
              Errors:                0
    Roosevelt-2801(config)#do sh ip wccp web
    Global WCCP information:
        Router information:
            Router Identifier:                   12.x.x.98
            Protocol Version:                    2.0
        Service Identifier: web-cache
            Number of Service Group Clients:     1
            Number of Service Group Routers:     1
            Total Packets s/w Redirected:        7800
              Process:                           94
              CEF:                               7706
            Service mode:                        Open
            Service Access-list:                 -none-
            Total Packets Dropped Closed:        0
            Redirect Access-list:                46
            Total Packets Denied Redirect:       8195426
            Total Packets Unassigned:            0
            Group Access-list:                   40
            Total Messages Denied to Group:      14
            Total Authentication failures:       8
            Total Bypassed Packets Received:     0
    So I can see the packets redirected, but the cache never sees them. The router and cache can ping each other, and the cache and LAN clients can ping each other. Am I missing something?

    So I found the problem... hopefully this helps somebody else in the future. The problem is that the redirected packets were sourced from the router's Multilink1 interface IP address, while the cache was expecting them from the router's fa0/0 interface, so it dropped them.
    Also, the cache has a "spoof client IP" option that was on, because we prefer that for NetFlow, but I don't think client-IP spoofing works with the standard web-cache WCCP service. It was causing internet problems, so I turned the spoofing off and it works fine...
    Hope this helps.
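    For reference, newer IOS releases offer a command to pin the WCCP router ID, and therefore the GRE source of redirected packets, to a chosen interface (an assumption here: it may not exist on a 12.4(22)T image, so verify availability on your router first):

    ```
    ! hypothetical: source WCCP GRE redirects from the cache-facing interface
    ip wccp source-interface FastEthernet0/0
    ```

    On images without the command, configuring the cache to accept GRE from the router identifier shown in "show ip wccp" (here 12.x.x.98) achieves the same result, as the poster found.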

  • Clear Windows local cache

    Hi,
    After a 10 MB file transfer across a WAN from the DC to a branch office with WAEs in inline intercepting mode, I noticed subsequent transfers were extremely fast even without the WAAS appliances' interception. It appears the Windows OS was also doing some local caching. I have checked and cleared the Temp folder and its contents, but there is no change.
    Are there any other Windows cache locations? How do I solve this?

    Obiora,
    The Windows redirector uses some caching operations for read and write requests, but there isn't a cache of the file that is kept.
    Are you sure the WAEs were not handling the traffic?
    Zach

  • Best way to keep a 'history' of cache changes?

    Simple question, really, which I guess many people have run into before, so I'm looking for a bit of 'best practice' as regards Coherence.
    We have a distributed cache which is holding financial data (Portfolio Positions), and we plan to update these using Entry Processors for scalability (as a single incoming Trade could affect multiple Positions, so we want them processed in parallel). So far, so good. (I hope! If you have a better approach, please feel free to add it. :))
    Now, each Position that is modified needs to be 'audited', so we have a 'before' and 'after' image. This needs to be persisted. I have currently created a separate cache - 'PositionHistoryCache' - and set it up so it's flushed to Oracle in a "write behind" manner. This seems to work OK - i.e. updating this 'other' distributed cache from within the Entry Processor works fine. Does this seem sensible as an approach as regards keeping 'history' - i.e. using a separate cache and 'put'ing to it?
    Also, I'm keen not to run into any 'reentrancy' problems in our application. So what's the general rule here, when we are using Entry Processors elsewhere? Is it simply the 'service name' that determines whether the distributed caches are served by different service threads? In other words, as long as the 'history' cache we are trying to talk to is declared with a different 'service-name' to the cache that has the calling Entry Processor we can freely 'put' to it without issue?
    Many thanks if you can help clear up the above design issues.

    Hi Steve,
    yes, the (possibly inherited) service name for the cache scheme determines which cache service a cache belongs to.
    As for best practice, you would probably want to use key affinity and the same cache service for the audit cache, and try to put the data into the backing map directly. Since these are inserts of child records (access to the audit record is demarcated by access to the to-be-audited record, and if you do it from an entry processor then the audit entry is always local, because it is affine to the to-be-audited entry), it should be safe, provided that you only ever insert or update the audit entry in the backing map from entry processors manipulating the parent entry.
    You would still have the same failure cases as the different cache service cache.put approach: if you crash after the audit record has been inserted but process did not finish, then you may end up with lost but audited updates or duplicate audit records for a single change.
    Note that this is an advanced functionality and you would probably want to consult with your Oracle support representative to ensure that you know the implications this approach brings with itself with each Coherence version you try to use it with.
    An alternative approach would be to move the audit records into the same cache as the audited record, use key affinity to ensure audit records reside on the same node as the audited record, and use entry processors sent to both the changed and the audit entry keys together, updating both records atomically. This is a much safer approach: it is guaranteed to be atomic as far as the cache is concerned. On the other hand, you need to know the audit entry key in advance and use the key-based invokeAll method (you can't use the filter-based invokeAll method, as that cannot add new entries). You also have additional work if you use filter-based read operations, to filter audit records out of query results.
    Best regards,
    Robert
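    To illustrate the service-name point with a sketch (hypothetical scheme and service names, not taken from the thread's configuration), the history cache can be mapped to a scheme whose service-name differs from the one hosting the Position entry processors, so a put from inside the processor runs on a different service thread:

    ```xml
    <cache-mapping>
      <cache-name>PositionHistoryCache</cache-name>
      <scheme-name>history-distributed</scheme-name>
    </cache-mapping>

    <distributed-scheme>
      <scheme-name>history-distributed</scheme-name>
      <!-- a different service-name than the Position cache's scheme avoids re-entrancy -->
      <service-name>HistoryDistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    ```

    As Robert notes, this avoids re-entrancy but is not atomic with the parent update, so a crash between the two writes can still lose or duplicate an audit record.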

  • Best Use of one SSD  for cache drives

    Hello... Months ago I added two physical Momentus XT drives to be used as cache-only drives: one as the Cache disk, and one as the Cache Database disk. This is the only function these drives serve...
    Doing this really sped up large session loads...
    I'm going to be getting one SSD soon, and was wondering... of the two caches, Cache or Cache Database, which would benefit the most from an SSD over a hybrid hard drive?
    Seeing that the controller is a 3 Gb/sec controller only... that's why I used two separate cache disks.
    Now that I'll have the benefit of SSD speed over a hybrid drive (faster reads only), should I combine both on the SSD or still keep them separate?
    Thanks!    Jan

    I have replaced the two Momentus XT drives (one with the Media Cache, the other with the Cache Database) with one Samsung 840 SSD, which has two folders: CS6 Cache and CS6 Cache Database.
    Prior to pulling the drives, I tested the time it took to load a 45 min 1080i AVCHD session that had over 400 scenes, 600 2K JPEGs, and about 25 pieces of music. From clicking on the session to load until "All Media Loaded" took about 34 seconds (this is with the caches on separate hard drives). I shut the system down, pulled the two drives, added the one SSD drive, powered up, formatted the drive, and added the above-mentioned folders... Then I launched PPro and pointed the Media Cache and Cache Database at these new locations... Then I launched the 45 min session and walked away while it created the data locations, peak files, etc., in the new SSD cache drive folders.
    When it was finished, I closed PPro, rebooted the system, launched PPro, and timed from the click on the session till the end of "All Media Loaded"... 19 seconds...
    Will be watching this intently......  
    EDIT: So far, I'm happy with the change... I am interested in anyone else who has done this sort of thing before, and in particular, whether you found any negatives of putting both caches on the same physical drive, even if it is an SSD...
    I did find my prior notes: going from caches on a RAID (a different directory than the media, though the media was on the same RAID) to separate Momentus XT drives decreased the load time of this same 45 min project by ~15 seconds.
    $$$ available was only enough for one SSD, and I'm not certain if multiple SSDs for caches would be worth the $$... which is pretty much why I posted here...
    Looking forward to your comments...

  • PS6: get message, purge cache in preference while using Bridge, everything freezes.

    The message says: please try purging the central cache preference to correct the situation. This comes up in Bridge.
    I tried the purge, but nothing happens; Bridge freezes, and I can't exit out.

    Restart Bridge holding down Ctrl (win) or Cmd (mac) and choose reset preferences and purge cache.
    If that doesn't work, try looking through this other thread: Cache issue with CS6 Bridge

  • Temp/Cache files in OAS/FORMS

    Hi,
    I'm running out of disk space in an old server and, after searching for the biggest folders, I found two that take most of the space:
    C:\app_1\application1 ----> In this folder there are other subfolders (cache, fmb, fmx, icon, lib, mmb, mmx, rdf...), with cache being the biggest one.
    C:\oracle\ora806\FORMS60 ----> There are a lot of files here, with random names starting with s (s43o.5, s1ak.6, and so on).
    Do you know if it's safe to delete those (apparently) temp/cache files in the cache and FORMS60 folders?
    Thanks

    Lol, thanks. I was wondering how to find them. I used the Explorer search but the PP cache files didn't show up.

  • Live cache data deletion

    Hi Friends,
    We are facing one strange problem with our liveCache.
    We have DP data in liveCache. Due to some reason, the planning results are corrupted.
    So we deleted the time series for the DP planning area and recreated the time series.
    What we found is that, even after deletion and recreation of the time series for the planning area, the key figure data still exists in liveCache and can be seen in the interactive planning book.
    Can someone please give some hints as to what went wrong?
    Do any other liveCache-related programs or jobs need to be run to get rid of the problem?
    Regards
    Krish

    Hi Thomas,
    You are right: after running /SAPAPO/TS_PSTRU_CONTENT_DEL, all CVCs for the POS are deleted and there are 0 CVCs left.
    But I have again generated CVCs from the InfoCube (possibly the same old combinations are generated, since the InfoCube is the same).
    Now I have initialized the planning area and then accessed the selection ID in interactive planning.
    Surprisingly, I can still see key figure values in the planning book.
    Here are the steps I did:
    1. Deleted time series for the PA (de-initialization)
    2. Deleted CVCs from the POS
    3. Deactivated the POS and reactivated the POS
    4. Generated CVCs again from the InfoCube
    5. Created time series for the PA (re-initialization)
    6. Loaded the selection ID in the interactive PB
    7. Still I can see key figure data for the loaded CVCs
    Somehow the data is not getting cleaned in liveCache in spite of doing the above steps; the data might be stored superfluously in liveCache.
    I want to know if any additional reports are available to get rid of these kinds of issues.
    Regards,

  • Unable to cache

    Hi, I am unable to cache any document through my Oracle Web Cache.
    Only the default documents, such as those with extensions like .gif, are caching.
    I am even unable to cache my PDFs.
    It seems like some concept is missing: my caching rule is showing none, even though I have set it.
    Can anybody please explain, step by step, how to cache a document?
    regds
    manish

    Well, I faced this "unable to cache" error once before, and the reason was that the JAR files which needed to be cached on the client side had no timestamp assigned to them on my Linux system where the application server was installed; the reason, as I came to know, was that I had copied them from a Windows system. To resolve the problem at that time, I copied them on my Linux server and pasted them again there, and this time my Linux server generated a TIMESTAMP for those files and it started working fine.
    This time I really didn't know what happened, but now I am going to tell you a strange story. If I use http://rg:1810 (in the Firefox browser) I can see the admin page of my Application Server Control, but if I use http://rg.lhr.systemsltd.com:1810, it displays a "Timeout occurred..." kind of error.
    But Internet Explorer doesn't report any error and works fine.
    Now, in my error message you can see that the APPLET first goes to "http://rg.lhr.systemsltd.com:7779/forms90/java/webutil.jar" to cache the file, and as this uses the fully qualified name of my server, it is unable to find that server, just as Mozilla couldn't, and returns a connection-timeout type of error message.
    Then I changed the server name and did a fresh installation, and it started working fine. It means there is some problem with my DNS, which causes the application applet to be unable to find files when the fully qualified name of the server is used.
    regards
