Load Up issue

When previewing my fully Flash site in Internet Explorer (F12), I have to click
"Allow blocked content" because it uses ActionScript or ActiveX. I'm afraid this
will be a big problem for some users when I launch the site. Will it be a
problem, or is there a simple fix?

Hi, I have the same problem, like thousands of others. I've been trying to
resolve it over the past two days, so here's what I found. You may also be
using version MX 2004 (Flash 7), as I am. I cannot get hold of the Flash 7.0.2
updater (Adobe no longer distributes it - can anyone provide that update file?).
Once that update is applied, you can then install Hotfix 2, and this stops IE
from displaying the ActiveX warning when previewing locally.
Do you know about the Deconcept method for embedding Flash and SWF files in a
web page? See
http://blog.deconcept.com/swfobject/
and follow the latest method for SWFObject (a minimal embed sketch follows below).
The root problem is that when IE sees an unsigned certificate for the Flash
item, it forces the warning dialogue box to bring to the user's attention that
the item is unsigned and may be risky. But as Thawte and VeriSign certificates
cost money, most people don't get digital signatures, so IE (and Mozilla, I
think) display the ActiveX warning.
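To illustrate, here is a minimal SWFObject-style embed sketch. The SWF file name, element ID and dimensions are placeholders; adapt them to your own movie and player version.

<div id="flashcontent">This site requires the Adobe Flash Player.</div>
<script type="text/javascript" src="swfobject.js"></script>
<script type="text/javascript">
    // hypothetical movie name, element ID and size
    var so = new SWFObject("mysite.swf", "mysite", "800", "600", "7", "#ffffff");
    so.write("flashcontent");   // replaces the div content with the Flash movie
</script>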
I found several possible solutions; try googling "stop IE Active X
warning". Here is a search result:
http://www.google.co.uk/search?hl=en&q=stop+IE+active+x+warning&btnG=Google+Search&meta=
Here are a few solutions; the first two are the simplest:
http://board.flashkit.com/board/archive/index.php/t-716265.html
http://www.softcomplex.com/forum/viewthread_2867/
http://answers.yahoo.com/question/index?qid=20080610101137AAmhDHu
Regards Jonathan

Similar Messages

  • Firmware update for HP OfficeJet Pro 8000 Wireless to fix the paper-load issue - as for the 8500?

    Is there any hope that there will soon be a firmware update to fix the paper-loading and printhead issues on the HP OfficeJet Pro 8000 Wireless printer?
    A month or two ago a firmware update came out for the HP OfficeJet Pro 8500 Wireless, which has some of the same issues
    and allegedly uses much the same loading mechanism as the 8000.
    It would be great, as we are on the fourth (4th) replacement of this HP 8000 printer. For example, it would be great if we could use normal (80 g) paper and not have to use (HP) 90 g paper, since that keeps us from using the duplexer and limits us to printing on one side.

    Thanks for the advice to contact HP tech support!
    I have tried in multiple ways since November 2009 to contact HP, when I got the first (of 4) printers. The first 3 printers were subsequently replaced.
    Finally, in late June 2010, I got the message that a firmware update was coming for Windows, but not for Macs. The firmware should improve, among other things, the paper pickup (the paper-load issue) and printhead life.
    I have applied the Windows firmware to the printer from a PC, but it still seems to have paper-load issues, even using HP Bright White paper, when printing from Macs. I have not tested printing from a PC, but I assume the firmware upgrades the printer itself and is not specific to a particular OS.

  • ITS load balancing issue

    Hi all,
    During our testing we are seeing a load-balancing issue. One of the AGates in our ITS network has more CPU power than the other AGates, while the memory on all the AGate servers is the same.
    The issue is that the AGate with more CPU power acquires more sessions than the other two AGates - roughly 60 more sessions per AGate process. Does having more CPU on an AGate affect load balancing in ITS? We are on ITS patch level 19 with the hotfix.
    Thanks,
    Jin Bae

    Hello Jin,
    yes, at (re)initialization the WGate retrieves the capacity from the AGates.
    This is an accumulated number based on CPU performance and the number of CPUs!
    The number can be seen in "wgate-status" as the "Capacity" of the AGate.
    When running multi-process AGates, the number is retrieved from the MManager and also takes the number of AGate processes into account.
    The WGate dispatches the load in proportion to these capacity numbers.
    To my knowledge there is no way these values can be configured (fixed).
    Regards,
      Fekke

  • SIP load balancing issue with ACE 4710

    SIP Load balancing Issue with ACE 4710
    I have a Cisco ACE 4710 running version A4(2.2). I configured simple SIP load balancing, first without stickiness. Without stickiness we have a problem: the BYE packet does not always go to the same server, which leaves ports in use even after the user hangs up the phone. This happens randomly. I have a total of 20 licensed ports and they fill up very quickly, so I decided to use stickiness with Call-ID, but I still have the same issue. Below is the config:
    rserver host CIN-VOX-31
      ip address 172.20.130.31
      inservice
    rserver host CIN-VOX-32
      ip address 172.20.130.32
      inservice
    serverfarm host CIN-VOX
      probe SIP-5060
      rserver CIN-VOX-31
        inservice
      rserver CIN-VOX-32
        inservice
    sticky sip-header Call-ID VOX_SIP_GROUP
      timeout 1
      timeout activeconns
      replicate sticky
      serverfarm CIN-VOX
    class-map match-all CIN_VOX_L4_CLASS
      2 match virtual-address 172.22.12.30 any
    class-map match-all CIN_VOX_SIP_L4_CLASS
      2 match virtual-address 172.22.12.30 udp eq sip
    policy-map type loadbalance sip first-match CIN_VOX_LB_SIP_POLICY
      class class-default
        sticky-serverfarm VOX_SIP_GROUP
    policy-map multi-match GLOBAL_DMZ_POLICY
       class CIN_VOX_SIP_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
      class CIN_VOX_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
    interface vlan 20
      description VIP_DMZ_VLAN
      ip address 172.22.12.4 255.255.255.192
      alias 172.22.12.3 255.255.255.192
      peer ip address 172.22.12.5 255.255.255.192
      access-group input PERMIT-ANY-LB
      service-policy input GLOBAL_DMZ_POLICY
    could you please help me on this...
    thanks
    Rakesh Patel

    I mean there should be one more statement-
    class-map type sip loadbalance match-any CIN_VOX_LB_SIP_POLICY 
    match sip header Call_ID header-value sip:
    and that will be called under-
    policy-map multi-match GLOBAL_DMZ_POLICY
       class CIN_VOX_SIP_L4_CLASS
        loadbalance vip inservice
        loadbalance policy CIN_VOX_LB_SIP_POLICY
        loadbalance vip icmp-reply
    Is that missing in your config?
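    For reference, here is a minimal sketch of how such a Call-ID class-map is typically tied into the SIP load-balancing policy. The class-map name is made up and the header match is an assumption; please verify the exact syntax against the ACE documentation for your release.
    class-map type sip loadbalance match-any CIN_VOX_CALLID_CLASS
      2 match sip header Call-ID header-value sip:
    policy-map type loadbalance sip first-match CIN_VOX_LB_SIP_POLICY
      class CIN_VOX_CALLID_CLASS
        sticky-serverfarm VOX_SIP_GROUP
      class class-default
        sticky-serverfarm VOX_SIP_GROUP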

  • CSS arrowpoint cookie load balancing issue

    Hi guys,
    I need some advice on a load balancing issue.
    We have connections hitting the CSS via a proxy environment, so as a result I see only one source IP address. I want to use ArrowPoint cookies for session stickiness. However, when I enable the rule, the TCP session negotiation fails: the CSS sends a TCP RST which terminates the session.
    Here's the rule config:
    content HTTP_rule
    add service ZSTS299102
    add service ZSTS281101
    vip address <filtered>
    add service LONS299102
    add service LONS281101
    balance weightedrr
    change service ZSTS299102 weight 5
    change service ZSTS281101 weight 5
    advanced-balance arrowpoint-cookie
    protocol tcp
    port 80
    url "/*"
    active
    Any help would be much appreciated.

    Remko,
    in L3/L4 the CSS sends the SYN directly to the server.
    So when the FIN comes in, we simply pass it to the server.
    With L5 the CSS spoofs the connection and we select the server only after receiving the GET.
    If there was some delay between the GET and the FIN, the CSS would have time to establish a connection with the server and the FIN could be simply forwarded.
    Unfortunately, in this case the FIN is right after the GET with no delay.
    Gilles.

  • Load monitoring issue

    Hi,
    In our load-monitoring process chain we have an issue with one master data load. The load fails with "Error message when processing in the Business Warehouse", and the monitor shows 2 duplicate records. How can I solve the issue and continue the process chain? Could anyone please suggest how to resolve it?
    Thanks!
    regards,
    Buvana.

    Hi Buvana,
    There is one option you can try, though I am not sure it is correct in every case. Go to the DTP, open the Update tab, and check the option "Handle Duplicate Record Keys". It should work; in the past, when we had a similar duplicate-records error, we resolved it the same way.
    regards,
    raghu.

  • ERPi Data load mapping Issue

    Hi,
    We are facing an issue with ERPi data load mappings. The mapping file (a .txt file) has 36k records, and whenever we try to load the mappings it takes a very long time, nearly 1 hour 30 minutes. We want to reduce that time. Is there any way to reduce the data load mapping time?
    Hyperion version: 11.1.2.2.300
    Please help, thanks in advance!!
    Thanks.

    Has anyone faced the same kind of issue?

  • Loading performance issues

    Hi gurus,
    Please can you help with a loading issue? I am extracting data from the standard purchasing extractor, and for 3 lakh (300,000) records it is taking 18 hours. Can you please suggest how to address the loading performance issues?
    -KP

    Hi,
    Loading Performance:
    a) Always load and activate the master data before you load the transaction data so that the SIDs don't have to be created at the loading of the transaction data.
    b) Have the optimum packet size. If the packet size is too small, the system writes messages to the monitor; those entries keep increasing over time and cause a slowdown. Try different packet sizes to arrive at the optimum number.
    c) Fine tune your data model. Make use of the line item dimension where possible.
    d) Make use of the load parallelization as much as you can.
    e) Check your CMOD code. If you have direct reads, change them to read all the data into an internal table first and then do a binary search (see the sketch after this list).
    f) Check code in your start routine, transfer rules and update rules. If you have BW Statistics cubes turned on, you can find out where most of the CPU time is spent and concentrate on that area first.
    g) Work with the basis folks and make sure the database parameters are optimized. If you search on OSS based on the database you are using, it provides recommendations.
    h) Set up your loads processes appropriately. Don't load all the data all the time unless you have to. (e.g) If your functionals say the historical fiscal years are not changed, then load only current FY and onwards.
    i) Set up your jobs to run when there is not much activity. If the system resources are already strained, your processes will have to wait for resources.
    j) For the initial loads only, always buffer the number ranges for SIDs and DIM IDs.
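    As a rough illustration of tip (e), here is a sketch in ABAP of replacing per-record direct reads with one array fetch plus a sorted internal table and BINARY SEARCH. The table ZMAT_ATTR and the fields MATERIAL/ZZATTR are placeholders for your own lookup table and fields; DATA_PACKAGE is the structure available in transfer/update rule routines.
    * Fetch all needed attribute records once, instead of a SELECT per record
    TYPES: BEGIN OF ty_attr,
             matnr TYPE matnr,
             attr  TYPE char10,
           END OF ty_attr.
    DATA: lt_attr TYPE STANDARD TABLE OF ty_attr,
          ls_attr TYPE ty_attr.

    IF data_package[] IS NOT INITIAL.
      SELECT matnr attr
        FROM zmat_attr
        INTO TABLE lt_attr
        FOR ALL ENTRIES IN data_package
        WHERE matnr = data_package-material.
      SORT lt_attr BY matnr.
    ENDIF.

    LOOP AT data_package.
      " fast lookup in the sorted internal table
      READ TABLE lt_attr INTO ls_attr
           WITH KEY matnr = data_package-material
           BINARY SEARCH.
      IF sy-subrc = 0.
        data_package-zzattr = ls_attr-attr.
        MODIFY data_package.
      ENDIF.
    ENDLOOP.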
    Hareesh

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    What are the data-loading performance issues we need to take care of? Please explain and let me know the transaction codes. This is urgent.
    Will reward full points.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8) Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9) Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using the ABAP Dictionary to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • Load-balancing issues with iPlanet and multiple clusters

    We're in performance test of a large-scale clustered deployment based on WLS 5.1sp10.
    Due to scalability/functionality issues, some of which we've seen firsthand and
    some of which we've been informed of by associates as well as BEA representatives,
    we've chosen to implement multiple clusters with a maximum of three nodes each.
    These clusters will be fronted by a web server tier consisting of iPlanet servers
    using the proxy plugin.
    Due to hardware constraints (both in test and in production), however, we've configured
    the iPlanet servers to route across the multiple clusters. In our test environment,
    for instance, we've got a single iPlanet server routing across two 3-node clusters,
    and the configuration in obj.conf is as follows:
    <Object name="application" ppath="*/application">
    Service fn="wl-proxy" \
    WebLogicCluster="clusterA_1:9990,clusterB_1:9991,clusterA_2:9990,clusterB_2:9991,clusterA_3:9990,
    clusterB_3:9991" \
    CookieName="ApplicationSession"
    </Object>
    Our issue is that the load-balancing doesn't appear to work across the clusters.
    We're seeing one cluster get about 90% of the load, while the other receives
    only 10%.
    So, the question (finally!) is: Is this configuration correct (i.e., will it
    work according to the logic of the proxy plugin), and is it appropriate for this
    situation? Are there other alternative approaches that anyone can recommend?
    Thanks in advance,
    cramer

    I use WebLogic 6.1 SP2 on Windows 2000. I developed a web application and deployed it to a cluster. Through WebLogic's HttpClusterServlet proxy I found that one server in the cluster gets almost 95% of the requests while the other gets only 5%. Why?
    I don't set any special parameters, the weights of the two clustered servers are equal, and I use the round-robin algorithm.
    Thanks!

  • Loading time issue

    hi,
    I have an issue with loading a custom component based on TitleWindow.
    The problem is that creating the component takes too much time.
    The component is based on TitleWindow and includes:
    1. an HBox containing 15 buttons (actually it is also a separate component)
    2. a TabNavigator containing 9 tabs (7 of them are custom components)
    I have used creationPolicy="all" for the TabNavigator, because I can't implement my logic using an on-demand creation policy.
    Note:
    If I use creationPolicy="auto", I can't perform my logic.
    E.g. I can't call the reset methods of components, because if I haven't navigated to a tab it has not yet been created and the object reference is null.
    The component is reused heavily in my system, but it takes too much time to load.
    What can I do? Any suggestions?

    If you want to go down the creation-policy route, set the creation policy to NONE, then use the initialize methods to create the children when you have data.
    I have a similar project (some code below). In the main <mx:Application/> block, I have a preinitialize method that gets the initial data (the data needed for TabOne) and then calls VSMain.initialize(). This creates the ViewStack but only TabOne. The others are created when they are accessed for the first time.
    <mx:VBox width="990" paddingLeft="0" paddingRight="0" horizontalCenter="0" height="570" horizontalScrollPolicy="off" verticalScrollPolicy="off">
    <mx:ViewStack id="VSMain" width="990" height="570" selectedIndex="0" creationPolicy="none">
         <s:NavigatorContent id="one" label="One">
              <main:TabOne/>
         </s:NavigatorContent>
         <s:NavigatorContent id="two" label="Two" creationPolicy="none">
              <main:TabTwo />
         </s:NavigatorContent>
         <s:NavigatorContent id="three" label="Three" creationPolicy="none">
              <main:TabThree/>
         </s:NavigatorContent>
    </mx:ViewStack>
    </mx:VBox>
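    A rough ActionScript sketch of the same deferred-creation idea, in case it helps. The method names come from the Flex deferred-content API (IDeferredContentOwner) as I understand it, and VSMain/two refer to the MXML above; treat this as an untested assumption rather than working code.
    private function showTabTwo():void {
        // create the deferred tab content once, before touching its children
        if (!two.deferredContentCreated) {
            two.createDeferredContent();
        }
        VSMain.selectedIndex = 1;   // then navigate to the tab
    }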

  • LOADER COMPONENT ISSUE

    I noticed the following issue when using the Loader component in Flash 8:
    I have a thumbnail gallery that uses the Loader component to view the large image.
    When I view the site in Firefox it works perfectly (scaling the content and centering it within the Loader component's shape), but when I view it in IE no scaling or centering occurs.
    Any idea why this is happening and how I can rectify it?

    Hey, why don't you try adjusting the permissions and hotlink protection in your server's control panel? This might be the problem.

  • Could not retrieve Enterprise Global Template - Load balancer issue

    Hi,
    We have 4 Project Server 2010 servers. The 4 web servers are load balanced by the networking team, with sticky sessions configured.
    When we try to connect to the Project Server using MPP 2007 SP2, it fails saying 'Could not retrieve Enterprise Global template'. It works perfectly when we point to a specific server by specifying its IP address for the server name in the 'hosts' file.
    Earlier we observed some errors in the event viewer related to SharePoint's internal load balancer, for which we restarted the 'Project Server Application' on each web server and it got fixed.
    Now, the only entries we see related to the load balancer are the ones below, logged as Information (not errors).
    SharePoint Web Services Round Robin Service Load Balancer Event: Initialization
    Process Name: w3wp
    Process ID: 15080
    AppDomain Name: /LM/W3SVC/539065287/ROOT-1-130462463500778047
    AppDomain ID: 2
    Service Application Uri: urn:schemas-microsoft-com:sharepoint:service:ae7c7ee5c09b4e8198bdbb1ecb8c1c1b#authority=urn:uuid:9f626d347784423eb14bde4a1f4d13fc&authority=https://lonms12546:32844/Topology/topology.svc
    Active Endpoints: 4
    Failed Endpoints:0
    Endpoint List:
    http://lonxxx2532:32843/ae7c7ee5c09b4e8198bdbb1ecb8c1c1b/PSI
    http://lonxxx2545:32843/ae7c7ee5c09b4e8198bdbb1ecb8c1c1b/PSI
    http://lonxxx2546:32843/ae7c7ee5c09b4e8198bdbb1ecb8c1c1b/PSI
    http://lonxxx2566:32843/ae7c7ee5c09b4e8198bdbb1ecb8c1c1b/PSI
    Could the issue be due to the network load balancer?
    Could the issue be due to the sticky-session configuration on the load balancer?
    How can we get to the root cause of the issue?
    Which logging category should we set to 'Verbose' to get some hint?
    Update: We tried to capture the requests through Fiddler and observed that when Fiddler is running on the client computer, the connection works perfectly fine even through the load balancer. Probably Fiddler is reformatting the SOAP envelope of the web service requests the way it should before sending the request to the server.
    If we do not run Fiddler and run some other similar tool (like Charles) instead, the issue reappears and the request gets stuck at /PWA/_vti_bin/psi/winproj.asmx
    We ran Wireshark on the servers and found the following for that web service call:
    [TCP Previous segment not captured] Continuation or non-HTTP traffic.
    Please let me know if someone could provide any hint what can be done next.
    Regards, Amit Gupta

    There are several ways to configure your load balancer. I would suggest that you work with the network engineer, the load balancer vendor and your project administrator to resolve this issue.
    Basically, you need the URL to be resolved correctly. Also, I don't believe PS2007 did a good job of handling load balancing, so you may need to bring in someone good with IIS and see if they can tweak IIS to manage the cache better.
    As I go back and look at your analysis, I think you should probably look at upgrading to Project Server 2013. They made some improvements in load balancing and the management of the distributed cache.
    I assume you have 4 WFEs because you have thousands of Project users. Roughly how many do you have? Over 1,000? Over 5,000?
    Have you tried to see whether using two load-balanced servers works? How about just one front end? I often see companies scaling SharePoint and Project Server to extremes.
    Michael Wharton, MVP, MBA, PMP, MCT, MCTS, MCSD, MCSE+I, MCDBA
    Website http://www.WhartonComputer.com
    Blog http://MyProjectExpert.com contains my field notes and SQL queries

  • Boot/Loading Screen Issue - suspecting MBR (MacBook Pro 11,3)

    Recap:
    Bought a MacBook Pro 11,3 at release and decided to install Windows 7 64-bit via Boot Camp. Ended up with quite a few problems, decided to abandon the task, and went back to Boot Camp to restore the partitions. Thought all was well, but I have one little issue.
    When I enter my password to pass through to the loading screen, my screen glitches for a second - basically loading a smaller (1/4-size) image of my desktop, then flashing to full screen and continuing to operate normally.
    Can anyone please tell me what's going on with my MBR reports?
    *** Report for internal hard disk ***
    Current GPT partition table:
    #      Start LBA      End LBA  Type
    1             40       409639  EFI System (FAT)
    2         409640   1952940543  Unknown
    3     1952940544   1954210079  Mac OS X Boot
    Current MBR partition table:
    # A    Start LBA      End LBA  Type
    1              1   1954210119  ee  EFI Protective
    MBR contents:
    Boot Code: None
    Partition at LBA 40:
    Boot Code: None (Non-system disk message)
    File System: FAT32
    Listed in GPT as partition 1, type EFI System (FAT)
    Partition at LBA 409640:
    Boot Code: None
    File System: Unknown
    Listed in GPT as partition 2, type Unknown
    Partition at LBA 1952940544:
    Boot Code: None
    File System: HFS Extended (HFS+)
    Listed in GPT as partition 3, type Mac OS X Boot
    gpt show: disk0: mediasize=1000555581440; sectorsize=512; blocks=1954210120
    gpt show: disk0: PMBR at sector 0
    gpt show: disk0: Pri GPT at sector 1
    gpt show: disk0: Sec GPT at sector 1954210119
           start        size  index  contents
               0           1         PMBR
               1           1         Pri GPT header
               2          32         Pri GPT table
              34           6        
              40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
          409640  1952530904      2  GPT part - 53746F72-6167-11AA-AA11-00306543ECAC
      1952940544     1269536      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
      1954210080           7        
      1954210087          32         Sec GPT table
      1954210119           1         Sec GPT header
    k1sm3t:~ kismet$ sudo fdisk /dev/disk0
    Disk: /dev/disk0 geometry: 121643/255/63 [1954210120 sectors]
    Signature: 0xAA55
             Starting       Ending
    #: id  cyl  hd sec -  cyl  hd sec [     start -       size]
    1: EE 1023 254  63 - 1023 254  63 [         1 - 1954210119] <Unknown ID>
    2: 00    0   0   0 -    0   0   0 [         0 -          0] unused     
    3: 00    0   0   0 -    0   0   0 [         0 -          0] unused     
    4: 00    0   0   0 -    0   0   0 [         0 -          0] unused     
    k1sm3t:~ kismet$ diskutil list
    /dev/disk0
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *1.0 TB     disk0
       1:                        EFI EFI                     209.7 MB   disk0s1
       2:          Apple_CoreStorage                         999.7 GB   disk0s2
       3:                 Apple_Boot Recovery HD             650.0 MB   disk0s3
    /dev/disk1
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:                  Apple_HFS k1ll3r k1sm3t          *999.3 GB   disk1
                                     Logical Volume on disk0s2
                                     9058B7B4-9079-4B3C-990C-8E0088050F51
                                     Unlocked Encrypted
    /dev/disk2
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:     Apple_partition_scheme                        *21.0 MB    disk2
       1:        Apple_partition_map                         32.3 KB    disk2s1
       2:                  Apple_HFS rEFIt                   20.9 MB    disk2s2
    Thoughts? What can I do to restore the MBR?

    Generally, when you hear the Apple beep codes you won't be able to boot up as usual. It may be an indication of RAM that is about to fail. Bring it in to Apple to be on the safe side.

  • Cisco ACE20 Load balancing issues

    Dear All,
    I have a problem with the ACE 20 load balance
    To start with following is our architectural request flow:
    Load Balancer --> Webseal /(reverse proxy) --> HTTP Server --> Portal Server
    We have Hardware Load Balancer Cisco ACE20.
    When we access our portal via the WebSEAL server it works totally fine without any issue, but when we access the same application through the ACE we face the following issues:
    1) Some of the links do not work. For example, we have a link "subscribe" which points to https://intranet/abc/wps/portal/subscription; whenever we click on this link, the request is directed to https://intranet/abc/wps/portal, i.e. the homepage.
    2) URL redirection does not work. We have some links with URL forwarding or redirection; for example, when we open https://intranet/ef/quickplace it forwards the request to https://intranet/ef/quickplace/Main.nsf?opendocument....., but this redirection fails and the request is again thrown to the homepage, i.e. https://intranet/abc/wps/portal.
    3) The overall portal response when accessed via the ACE is very sluggish: it takes 20 seconds for the homepage to load, whereas it loads in 4 seconds when accessed via WebSEAL.
    Below are the ACE details. Kindly provide your inputs to resolve this issue; I will rate all suggestions.
    Hardware Product Number: ACE20-MOD-K9
      Card Index:     207
      Hardware Rev:   2.3
      Feature Bits:   0000 0002
      Slot No. :      7
      Type:           ACE
    Software
      loader:    Version 12.2[120]
      system:    Version A2(1.4) [build 3.0(0)A2(1.4) adbuild_11:54:12-2009/03/05_/a
    uto/adbu-rel2/rel_a2_1_4_throttle/REL_3_0_0_A2_1_4]
      system image file: [LCP] disk0:c6ace-t1k9-mz.A2_1_4.bin
      installed license: ACE-SEC-LIC-K9

    Dear all,
    Please suggest on this issue.
    BS

  • SQL Loader format issue

    Hello,
    I'm having an issue with the formatting of a date. My .csv file, exported from another Oracle table, has the date in full date format ()
    I get the error:
    Record 12: Rejected - Error on table LOAN_VER_REQ_ARCH, column ORIGINATION_DATE.
    ORA-01843: not a valid month
    and two other fields that are date fields get similar errors.
    Dates in .csv file appear as such:
    For example - 10/26/2001 0:00:00
    The question is: in my .ctl file, how do I format this?
    I've tried to_char(ORIGINATION_DATE,'MM-DD-YYYY') and
    to_date(ORIGINATION_DATE,'MM-DD-YYYY'). These don't work. I don't recall doing any of this in Oracle 9i, but perhaps I wasn't loading date fields in any of the tables I uploaded data into.
    Any tips would be appreciated. I get a message on the command prompt about:
    Copyright (c) 1982, 2005, Oracle.  All rights reserved.
    Commit point reached - logical record count 39
    Commit point reached - logical record count 78
    but that's incorrect because no records are going into the table. I'll worry about that part later, though, if I can get this date thing resolved.
    Thanks!

    Well, the sample string you've provided doesn't match the formats you've been attempting to use. Yours are separated with a hyphen whereas your sample data is separated by a slash.
    Is ALL the data in the same format?
    If so, perhaps this (based on your sample string)....
    TUBBY_TUBBZ?select to_date('10/26/2001 0:00:00', 'mm/dd/yyyy HH24:MI:SS') from dual;
    TO_DATE('10/26/20010
    26-OCT-2001 12 00:00
    1 row selected.
    Elapsed: 00:00:00.01
    As for specifying this in the control file:
    http://download.oracle.com/docs/cd/B10501_01/server.920/a96652/ch09.htm#1011137
    (you mentioned Oracle 9i, so I'm not sure if that's the version you are still on or not).
    Edited by: Tubby on Jul 27, 2010 11:50 AM
    Added link to documentation.
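    For what it's worth, here is a minimal control-file sketch of that approach. The file name and the LOAN_ID column are placeholders; the point is only the DATE mask on ORIGINATION_DATE, which should match the slash-separated sample value above.
    LOAD DATA
    INFILE 'loan_ver_req_arch.csv'        -- hypothetical file name
    APPEND
    INTO TABLE LOAN_VER_REQ_ARCH
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      loan_id,                            -- placeholder for the other columns
      origination_date DATE "MM/DD/YYYY HH24:MI:SS"
    )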

Maybe you are looking for

  • Error while adding Product to Technical System in SLD

    Hello all. I am trying to add a product to a technical system and I get the following error: CIM_ERR_ALREADY_EXISTS: Instance already exists: SAP_InstalledProduct.CollectionID="39e669ae-4752-5274-31d1-d7d085055ac",ProductIdentifyingNumber="0",Produ

  • Data source and infoPackage needed

    Hi all, when I install some objects from BI Content, only the DataSources for texts have a predefined InfoPackage; the other DataSources (for attributes or hierarchies) don't have any predefined InfoPackage. So do we need to create an InfoPackage for each

  • HP ProBook 4520s webcam drivers are not installing

    I am using Windows 8.1 64-bit OS. I downloaded the webcam drivers from the HP support site, but the software is not installing. Please help me. My email: [email protected]

  • Frameworks

    I am new to JEE and a little lost with frameworks - btw, I know some people swear by what they use, which has made my search a little more confusing. I am thinking of Spring+Hibernate+Struts 2 or EJB3+JPA and perhaps Struts 2 as well. I understand th

  • Printing a byte array

    Hi all, I have a string of hex characters, say aeef1234569a. Now I want to store this in a byte array as {0xae,0xef,0x12,0x34,...}. How can this be done? Many thanks. Ram