Improving response time

Here is my current situation. I am running MARS application version 4.2.1. My test environment consists of one 2821 router with an IPS module running IOS 12.4T, one 2621 router running NetFlow on IOS 12.2, one 2950 switch, and a standalone MARS server. When I run the scenario to test the response time (the time it takes MARS to notify me of the attack), it takes anywhere from half an hour to an hour before the alarm shows up in the event dashboard. Is there a way to improve the response time? Secondly, I see the IPS signatures in the syslog, but I do not see them in my MARS application when investigating an attack. If I run a real-time query I see everything, but I can't investigate the attack.

David,
A few questions so we can help with your troubleshooting. Please forgive me for beginning with some MARS 101 questions.
1. How long have you had this test environment set up? Netflow usually needs a few days to baseline your network, even in a test lab.
2. What sort of scenario are you using to test the response time?
3. Do these bugs apply to your situation?
Take a look at:
CSCsc50636, CSCsc50652
Issues: Backend IPS process runs at 99% CPU when pulling large IP Logs
The backend IPS process reaches 1 GB of memory used when pulling IP logs. The process names depend on the version of MARS that is running:
In version 4.2.1 and earlier, the process names are pnids50_srv and pnids40_srv.
Go to the command line on the MARS box and run sysstatus to see what system resources a process on your MARS device is using.
Hope this helps. Let us know more details when you have the chance.

Similar Messages

  • Any way to improve response time with iPhoto 8.1.2 using Mountain Lion ?

    Can you store photos on a separate hard drive and use a smaller file for opening iPhoto?

    How did you move your iPhoto library to the new system? The recommended way is to connect the two Macs together (network, FireWire target mode, etc.) or use an external hard drive formatted Mac OS Extended (Journaled), and drag the iPhoto library intact, as a single entity, from the old Mac to the Pictures folder of the new Mac. Launch iPhoto on the new Mac and it will open the library and convert it as needed, and you will be ready to move forward.
    LN

  • How to improve response time of database

    We have an Oracle 8.0.5 database with multiple users that is running on NT, and several clients access the database through an application. The response time is really slow. How can I make access faster, and where do I make changes for them to take effect?
    Thank you.

    I tried several times to open/print the white paper at the following address, but I always got an error, something like 'There is an error on this document (14)':
    http://otn.oracle.com/products/reports/pdf/275641.pdf
    Can you please help resolve this problem so that I can open this document in the Acrobat PDF viewer? I need this paper urgently, as explained at the start of this question.
    Tariq

  • How to improve response time of queries?

    Although it looks as if this question relates to Reports, the problem is actually related to SQL.
    I have a catalogue-type report which retrieves data and prints it. There are not many calculations involved.
    It uses six tables.
    The data model is such that I have a separate query for each table, and all these queries are linked through the link tool.
    Each table contains 3,000 to 9,000 rows, but one table contains 35,000 rows.
    The problem is that the report is taking too much time - about 3-1/2 hours - while the expectation is that it should take 20 to 40 minutes at most.
    What can I do to reduce the time it takes to produce the report? Should I modify the data model to use a single query with an equi-join?
    A) Specifically, I want to know what the traffic between client and server is when:
    1) we have multiple queries and the LINK tool is used
    2) a single query with an equi-join is used
    B) Which activity is taking most of the time?
    1) Retrieving data from the server to the client
    2) Sorting according to groups (at the client), formatting the data and saving it to a file
    Please guide.
    Everybody is requested to contribute as per his/her experience and knowledge.
    M Tariq

    Generally speaking, your server is faster than your PC (if it is not, then you have bigger problems than a slow query), so let the server do as much of the work as possible, particularly things like sorting and grouping. Any calculations that can be pushed off onto the server (e.g. aggregate functions, cola + colb, etc.) should be.
    A single query will always be faster than multiple queries. Let the server do the linking.
    The more rows you return from the server to your PC, the more bytes the network and your PC have to deal with. Network traffic is "expensive", so get the result set as small as possible on the server before sending it back over the network. PCs generally have little RAM and slow disks compared to servers. Large datasets cause swapping on the PC, and this is really expensive.
    Unless you are running on a terribly underpowered server, I think even 30 - 40 minutes would be slow for the situation you describe.
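    As a quick sketch of the single-query approach recommended above: the table and column names below (items, prices, suppliers and their columns) are only illustrative placeholders, not the actual report tables.
    {code}
    -- One query instead of six linked queries: the server performs the joins,
    -- the filtering and the ORDER BY, and only the finished result set
    -- travels over the network to the Reports client.
    SELECT i.item_no,
           i.description,
           p.list_price,
           s.supplier_name
      FROM items     i,
           prices    p,
           suppliers s
     WHERE p.item_no     = i.item_no        -- equi-join conditions
       AND s.supplier_id = i.supplier_id
     ORDER BY i.item_no;                    -- sort on the server, not on the PC
    {code}
    With a data model like this, Reports receives one already-joined, already-sorted result set instead of linking 3,000 to 35,000 rows per table on the client.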

  • How to improve response time of report?

    I have a catalogue-type report which retrieves data and prints it. There are not many calculations involved.
    It uses six tables.
    The data model is such that I have a separate query for each table, and all these queries are linked through the link tool.
    Each table contains 3,000 to 9,000 rows, but one table contains 35,000 rows.
    The problem is that the report is taking too much time - about 3-1/2 hours - while the expectation is that it should take 20 to 40 minutes at most.
    What can I do to reduce the time it takes to produce the report? Should I modify the data model to use a single query with an equi-join?
    A) Specifically, I want to know what the traffic between client and server is when:
    1) we have multiple queries and the LINK tool is used
    2) a single query with an equi-join is used
    B) Which activity is taking most of the time?
    1) Retrieving data from the server to the client
    2) Sorting according to groups (at the client), formatting the data and saving it to a file
    Please guide.
    Everybody is requested to contribute as per his/her experience and knowledge.
    M Tariq

    I tried several times to open/print the white paper at the following address, but I always got an error, something like 'There is an error on this document (14)':
    http://otn.oracle.com/products/reports/pdf/275641.pdf
    Can you please help resolve this problem so that I can open this document in the Acrobat PDF viewer? I need this paper urgently, as explained at the start of this question.
    Tariq

  • Improve response time ...

    Hi,
    A client calls a BMP entity bean to get all the records from a table. This takes a few seconds even though there are fewer than 10 records. Note that the context has already been initialized before we get to the screen which extracts the data (the first attempt to optimize). I use ejbFindAll, which returns an enumeration.
    What approach should I adopt to improve performance? I'm worried about when my screen will have hundreds of records to retrieve!
    Thanks.

    It doesn't seem to make any improvement to use WebLogic's connection pooling mechanism. Wouldn't it be better to take another approach, which would consist of introducing a session bean between the client and the entity bean to collect all the records and send them at once?
    Guy Tal <[email protected]> wrote in message
    news:86l7d5$eml$[email protected]..
    Does your BMP bean create a new connection or are you using a connection pool?
    If you're creating a new connection, it will slow you down.
    Guy

  • Improve the response time of logical database

    Hi all,
    To improve the response time (the time to access data) in a logical database, how can we achieve this?

    "...but the same code is having a good response time in the production environment but is slow in development..." Sure - your server might be doing a lot of time-consuming things when running in development mode (e.g. to enable debugging, extra monitoring or tracing). Do you have equal machines in both environments? Heavy tracing to log files in the development environment is one possible bottleneck, or e.g. the amount of memory the DB server is allowed to use (table joins in memory vs. using temporary files).

  • How can I improve the response time of the user interface?

    I'm after some tips on how to improve the response time to mouse clicks on a VI front panel.
    I have a data acquisition application which used to run fine, but after spending a couple of weeks making a whole bunch of changes to it I find that the user interface has become a bit sluggish.
    My main GUI VI has a while loop running 16 times a second, updating some waveform charts and polling about a dozen buttons on the front panel.
    There is sometimes a delay (variable, but up to 2 seconds sometimes) from when I click on a button to when it becomes depressed. I have wired the iteration terminal of the while loop to an indicator on the front panel and can see that the while loop is ticking over during the delayed response to the mouse click, so I know that the problem is not that the whole program is running slow, just the response to mouse clicks.
    Also, just for debugging purposes, I have indicators of the iterations of all the main while loops in my program on the front panel, so I can see that there are no loops running abnormally fast either.
    One thing I've tried is to turn off multi-threading, and this does seem to work - the response to mouse clicks is much faster. However, it has the side effect of making the main GUI while loop run less evenly. I was trying to get a fairly smooth waveform scrolling across the screen, and when multi-threading is off it gets a bit jerky.
    Any other suggestions are welcome.
    (I am using LabVIEW 7.1, Windows 2000).
    Regards,
    Mark.

    Hi Altenbach,
    Thanks for your reply. In answer to your questions:
    I am doing both DAQ board and serial data acquisition. I am using NIDAQ traditional for the DAQ board, and VISA for the serial. I have other similar versions of this program that do only DAQ board, or only serial, and these work fine. It was only when I combined them both into the same program that I ran into problems.
    The multiple while loops are actually in separate VIs. I have one VI that acquires data from the DAQ card, another VI that acquires data from the serial port, another VI that processes the data and saves it to file, and another VI, the GUI VI, that displays the data in graphs and charts. The data is transferred between the VIs via LV2 globals.
    The GUI VI is a bit more complicated than I first mentioned. It has a tab control, with 4 waveform charts on one page, 4 waveform graphs on another page, and 3 waveform graphs on another page. The charts have a history length of 2560, and 16 data points are added 16 times a second. The waveform graphs are only updated once per minute.
    I don't use the value property at all, but I do use lots of property nodes for changing the properties of the graphs and charts e.g. changing plot colours, Y scale range etc. There is only one local variable (for the Tab control). All the graphs and charts have data wired directly to their terminals.
    I haven't done any profiling yet.
    I am building arrays in uninitialised shift registers, but this is all well under control. As the experiment goes on, more data is collected and stored, and so the memory usage does gradually increase, but only to the extent that I would expect.
    The CPU usage is 100%, but I thought this was always the case when using NIDAQ with DAQ cards. Or am I wrong about this? (As a side note, I am using NIDAQ traditional, but would NIDAQmx be better?)
    Execution priority of the GUI vi (and all the other VIs for that matter) is set to normal.
    The program is a bit large to post here, and I'm not sure if my company would be happy for me to publicise it anyway, so I suspect that this is turning into one of those questions that are going to be impossible to answer.
    Just as a more general question, why would turning off multi-threading improve the user interface response?
    Thanks,
    Mark.

  • Webservice response times --How can we improve ?

    Hi All,
    I am making two different calls to a function module from Java:
    1. Web service   2. JCo
    When I go to the STAD transaction I can see that the web service response timings are higher compared to JCo.
    What is interesting here are the CPU timings and DB timings.
    In some cases the DB timings for the web service are 3 to 4 times higher than for JCo.
    Ideally the DB timings should be similar in both cases, right?
    The CPU timings are also higher for the web service. Why? How do we optimize this for good web service performance?
    My web service is a simple one containing 4 input parameters (simple types) and returning a simple structure.
    JCo response time is around 500-2000 ms.
    Web service response time is 2000-5000 ms.
    Looking for expert suggestions from our community
    Thanks in advance.
    Best Regards, Anilkumar

    Hi,
    JCo is a native binary RFC or Fast RFC and goes through the gateway.
    Web services are text-oriented, have more overhead and, in sum, less performance; they go through the ICM.
    E.g. RFC connections are roughly 10 times faster than web services!

  • How to improve server response time...

    We have two Solaris 5.8 machines with 4 GB of RAM in each of them, so they are pretty big. We have round-robin load balancing. We are using LoadRunner to monitor the performance of our application. So far our application has not even passed a 50-user load test. When we perform the load test, the site is not usable because it gets extremely slow. So we thought that by increasing the request threads (in both kxs and kjs) and the connection pool size we might increase performance, but it did just the opposite. As the number of users increases, the response time increases as well. Our application framework is based on MVC. What is the best setting for the application server for a large number of users? Currently in production our old application supports 200 users at a time (using LoadRunner).
    We are using an Oracle 8i database.
    Any help will be appreciated!
    Thanks

    Hi,
    we are having problems with server response on Solaris, although this is with an Informix database and we do not have load balancing set up. Our server has 3 GB of RAM. However, the same app ran fine on an NT iPlanet installation where the server only had 512 MB of RAM.
    This has occasioned a lot of hard work trying to pin down the problem. We are currently investigating the JDKs. JDK 1.3 almost completely eliminates the problem we had. Using the Java profiler we noticed that in JDK 1.2.2 most of the CPU time seemed to be spent in the socket read and socket accept methods. I've attached the profile output for JDK 1.3 and 1.2.2; there is certainly a marked difference.
    I would be really interested to see if you get similar results. I simply added -Xrunhprof:cpu=samples to the JAVA_ARGS in iasenv.ksh.
    Andy
    [Attachment profile.log.jdk12, see below]
    [Attachment profile.log.jdk13, see below]

  • Help required in optimizing the query response time

    Hi,
    I am working on an application which uses a JDBC thin client. My requirement is to select all the rows in one table and use the column values to select data in another table in another database.
    The first table can have a maximum of 6 million rows, but the second table has only around 9,000 rows.
    My first query returns within 30-40 milliseconds when the table has 200,000 rows. But when I iterate the result set and query the second table, each query takes around 4 milliseconds.
    The second query's selection criterion is to find the value within a range.
    For example: my_table (column1 varchar2, start_range varchar2, end_range varchar2)
    My first query returns a result which is then used to select using the following query:
    select column1 from my_table where start_range < my_value and end_range > my_value;
    I have created an index on start_range and end_range. This query is taking around 4 milliseconds, which I think is too much.
    I am using a PreparedStatement for the second query loop.
    Can someone suggest how I can improve the query response time?
    Regards,
    Shyam

    Try the code below.
    Prerequisite: you should know how to pass ARRAY objects to Oracle and receive result sets from Java. There are thousands of samples available on the net.
    I have written sample DB code for the same interaction.
    Procedure get_list takes an array input from Java and returns the record set back to Java. You can change the table names and the criteria.
    Good luck.
    DROP TYPE idlist;
    CREATE OR REPLACE TYPE idlist AS TABLE OF NUMBER;
    /
    CREATE OR REPLACE PACKAGE mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor);
    END mypkg1;
    /
    CREATE OR REPLACE PACKAGE BODY mypkg1
    AS
       PROCEDURE get_list (myval_list idlist, orefcur OUT sys_refcursor)
       AS
          ctr   NUMBER;
       BEGIN
          DBMS_OUTPUT.put_line (myval_list.COUNT);
          FOR x IN (SELECT object_name, object_id, myvalue
                      FROM user_objects a,
                           -- COLUMN_VALUE returns each element of the collection
                           (SELECT COLUMN_VALUE myvalue
                              FROM TABLE (myval_list)) b
                     WHERE a.object_id < b.myvalue)
          LOOP
             DBMS_OUTPUT.put_line (   x.object_name
                                   || ' - '
                                   || x.object_id
                                   || ' - '
                                   || x.myvalue);
          END LOOP;
       END;
    END mypkg1;
    /
    Testing the code above. Make sure DBMS output is on (SET SERVEROUTPUT ON in SQL*Plus).
    DECLARE
       a      idlist;
       refc   sys_refcursor;
       c number;
    BEGIN
       SELECT x.nu
       BULK COLLECT INTO a
         FROM (SELECT 5000 nu
                 FROM DUAL) x;
       mypkg1.get_list (a, refc);
    END;
    /
    Vishal V.
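    As an alternative to passing an array, one set-based statement can sometimes replace the whole client-side loop. The sketch below is only illustrative: driving_table, key_col and my_value are placeholder names, and it assumes the range table is reachable over a database link called remote_db (adjust depending on which table is actually in the other database).
    {code}
    -- One join instead of a per-row loop: a single round trip replaces
    -- one query for every row fetched from the driving table.
    SELECT d.key_col,
           m.column1
      FROM driving_table       d,
           my_table@remote_db  m
     WHERE m.start_range < d.my_value
       AND m.end_range   > d.my_value;
    {code}
    Also note that start_range, end_range and my_value are VARCHAR2 in the original post, so the range comparisons are string comparisons; if the values are really numeric, comparing them as numbers may behave differently and can make better use of the index.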

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager but we're not sure what. If we have a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yeah Volker, you are absolutely right. The 10-12 seconds happens when we have not used the transaction for several minutes. It looks like the transactions are moved out of the SAP buffer, or something similar, within a very short time.
    And yes, the ABAP WPs are running in pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
    I would say the performance of the Java part is much better than that of the ABAP part.
    Should I just remove the ABAP part of SOLMAN from memory pool 2 and assign the Java/ABAP stack a separate huge memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from twice daily to once weekly on all systems on this box. It is running twice daily right now.
    Should I change it to once weekly on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only about 30% on average. It's i570 hardware, currently running 5 CPUs.
    So do you still think I should change this job from twice daily to once weekly on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on change management in the 3.2 version of Solution Manager, and the response times were much better then compared to 4.0.
    Let me know, guys, and once again, thanks a lot for your help and valuable input.
    Abhi

  • Spatial Query Response Time

    O/S - Sun Solaris
    ver - Oracle 8.1.7
    I am trying to improve the response time of the following query. Both tables contain polygons.
    select a.data_id, a.GEOLOC from information_data a, shape_data b where a.info_id = 2 and b.shape_id = 271 and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE'
    The response time with info_id not indexed is 9 seconds. When I index info_id, I get the following error. Why is indexing info_id causing a spatial index error? Also, other than manipulating the tiling level, is there anything else that could improve the response time?
    ERROR at line 1:
    ORA-29902: error in executing ODCIIndexStart() routine
    ORA-13208: internal error while evaluating [window SRID does not match layer
    SRID] operator
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 84
    ORA-06512: at line 1
    Thanks,
    Ravi.

    Hello Ravi,
    Both layers should have SDO_SRID values set in order for the index to work properly.
    After you do that you might want to add an Oracle hint to the query:
    select /*+ ordered */ a.data_id, a.GEOLOC
    from shape_data b, information_data a
    where a.info_id = 2 and b.shape_id = 271
    and sdo_filter(a.GEOLOC,b.GEOLOC,'querytype=window')='TRUE' ;
    Hope this helps,
    Dan
    Also, if only one or very few rows have a.info_id=2 then the function sdo_geom.relate
    might also work quickly.
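    For completeness, here is a minimal sketch of what "setting the SDO_SRID values" can look like. It assumes SRID 8307 purely as an example, and the spatial index names are placeholders; substitute whatever SRID and index names actually apply to your layers.
    {code}
    -- 1. Record the SRID in the spatial metadata for both layers.
    UPDATE user_sdo_geom_metadata
       SET srid = 8307
     WHERE table_name IN ('INFORMATION_DATA', 'SHAPE_DATA');

    -- 2. Stamp the same SRID into the stored geometries themselves.
    UPDATE information_data a SET a.geoloc.sdo_srid = 8307;
    UPDATE shape_data       b SET b.geoloc.sdo_srid = 8307;
    COMMIT;

    -- 3. Rebuild (or drop and re-create) the spatial indexes so they
    --    pick up the new SRID.
    ALTER INDEX information_data_sidx REBUILD;
    ALTER INDEX shape_data_sidx REBUILD;
    {code}
    Once both layers carry the same SRID, the "window SRID does not match layer SRID" error should go away and the sdo_filter query can use the spatial index as intended.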

  • How to obtain the Query Response Time of a query?

    Given the average row length of the tables and the number of rows in each table, is there a way to get the query response time of a query involving those tables? The query includes joins as well.
    For example, suppose there 3 tables t1, t2, t3. I wish to obtain the
    time it takes for the following query:
    Query
    SELECT t1.col1, t2.col2
    FROM t1, t2, t3
    WHERE t1.col1 = t2.col2
    AND t1.col2 IN ('a', 'c', 'd')
    AND t2.col1 = t3.col2
    AND t2.col1 = t1.col1 (+)
    ORDER BY t1.col1
    Given are:
    Average Row Length of t1 = 200 bytes
    Average Row Length of t2 = 100 bytes
    Average Row Length of t3 = 500 bytes
    No of rows in t1 = 100
    No of rows in t2 = 1000
    No of rows in t3 = 500
    What is required is the 'query response time' for the said query.

    I do not know how to do it myself. But if you are running Oracle 10g, I believe there is a new tool called SQL Tuning Advisor which might be able to help.
    Here are some links I found doing a Google search, and it looks like it might meet your needs and even give you more information on how to improve your code.
    http://www.databasejournal.com/features/oracle/article.php/3492521
    http://www.databasejournal.com/features/oracle/article.php/3387011
    http://www.oracle.com/technology/obe/obe10gdb/manage/perflab/perflab.htm
    http://www.oracle.com/technology/pub/articles/10gdba/week18_10gdba.html
    http://www.oracle-base.com/articles/10g/AutomaticSQLTuning10g.php
    Have fun reading:
    You can get help from teachers, but you are going to have to learn a lot by yourself, sitting alone in a room ....Dr. Seuss
    Regards
    Tim
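    If you are on 10g, the SQL Tuning Advisor mentioned above can be driven directly from SQL*Plus. The block below is a minimal sketch, not a guaranteed recipe: it assumes you have the ADVISOR privilege, uses an arbitrary task name (resp_time_task), and passes a slightly simplified version of the query from the question.
    {code}
    DECLARE
       l_task   VARCHAR2 (64);
    BEGIN
       -- Hand the statement to the SQL Tuning Advisor (Oracle 10g and later).
       l_task := DBMS_SQLTUNE.create_tuning_task (
                    sql_text   => 'SELECT t1.col1, t2.col2
                                     FROM t1, t2, t3
                                    WHERE t1.col1 = t2.col2
                                      AND t1.col2 IN (''a'', ''c'', ''d'')
                                      AND t2.col1 = t3.col2
                                    ORDER BY t1.col1',
                    time_limit => 60,
                    task_name  => 'resp_time_task');
       DBMS_SQLTUNE.execute_tuning_task (task_name => 'resp_time_task');
    END;
    /
    -- The report shows the optimizer's cost estimates and any recommendations.
    SET LONG 100000
    SELECT DBMS_SQLTUNE.report_tuning_task ('resp_time_task') FROM dual;
    {code}
    The report will not hand back a single exact response time, but its estimated statistics and recommendations are usually more useful than trying to derive a figure from row counts and average row lengths by hand.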

  • Significant difference in response times for same query running on Windows client vs database server

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    Ideally I would like to get a response time equivalent on the Windows client to what I get when running this on the database server.
    In both cases the query plans are the same.
    The query and plan is shown below :
    {code}
    SQL> explain plan
      2  set statement_id = 'SLOW'
      3  for
      4  SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      5  FROM documents objecttype WHERE objecttype.id_type_definition = 'duotA9'
      6  ;
    Explained.
    SQL> select * from table(dbms_xplan.display('PLAN_TABLE','SLOW','TYPICAL'));
    PLAN_TABLE_OUTPUT
    | Id  | Operation          | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)|
    |   0 | SELECT STATEMENT   |           |  2852K|    46M|       | 69851   (1)|
    |   1 |  HASH UNIQUE       |           |  2852K|    46M|   153M| 69851   (1)|
    |*  2 |   TABLE ACCESS FULL| DOCUMENTS |  2852K|    46M|       | 54063   (1)|
    {code}
    Are there any configuration changes that can be made on the Oracle client or database to improve the response time for the query when it is run from the client?
    The version on the database server is 10.2.0.1.0
    The version of the oracle client is also 10.2.0.1.0
    I am happy to provide any further information if required.
    Thank you in advance.

    I have a query which is taking a long time to return the results using the Oracle client.
    When I run this query on our database server (Unix/Solaris) it completes in 80 seconds.
    When I run the same query on a Windows client it completes in 47 minutes.
    There are NO queries that 'run' on a client. Queries ALWAYS run within the database server.
    A client can choose when to FETCH query results. In SQL Developer (or Toad) I can choose to get 10 rows at a time. Until I choose to get the next set of 10 rows, NO rows will be returned from the server to the client; that query might NEVER complete.
    You may get the same results depending on the client you are using. Post your question in a forum for whatever client you are using.
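    Building on that point, one way to separate server time from client fetch-and-display time, assuming SQL*Plus is available on the Windows client, is to fetch every row without rendering it:
    {code}
    -- In SQL*Plus on the Windows client: fetch all rows but suppress the
    -- output, so the elapsed time reflects server work plus network fetches
    -- rather than client-side display.
    SET TIMING ON
    SET AUTOTRACE TRACEONLY STATISTICS
    SET ARRAYSIZE 500          -- fewer, larger fetch round trips
    SELECT DISTINCT /*+ FIRST_ROWS(503) */ objecttype.id_object
      FROM documents objecttype
     WHERE objecttype.id_type_definition = 'duotA9';
    {code}
    If this finishes in a time close to the 80 seconds seen on the server, the remaining 47 minutes is being spent fetching and displaying roughly 2.8 million rows on the client, and the fix lies in how the client fetches and displays rows rather than in the database configuration.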
