Outage in N. Bethesda - or maybe further?

My TV just went out too. No stations whatsoever. I'm in Clifton, NJ. Verizon really ought to have an alert area on their home page to check for info on outages!

You can check for outages in your area using the STB. There is a "Network Diagnostics" item under In-Home Agent in the Customer Support menu area.

Similar Messages

  • DSL Internet outage in N. Bethesda - or maybe further?

    DSL has been out for almost 24 hours. Verizon told me 12 hours ago that they would have it fixed in 4 to 6. Interestingly, Verizon no longer has a system status page, so users have no way to know what's going on. But I do know that yesterday they had a big outage in PA as well. Totally unacceptable: in the age of Twitter, Facebook, and texting, we are left cut off with no explanation.

    You can check for outages in your area using the STB. There is a "Network Diagnostics" item under In-Home Agent in the Customer Support menu area.

  • Missing Outages Settings in Privileges

    Hello,
    As I am working through configuring Service Desk for evaluation, I can't help but come across little issues that I'm unable to find answers for in the documentation or on the support site. Since we're just evaluating and have no support maintenance on the product, I turn to the forums for your help.
    In this case, logged in as a user in the supervisors group or with the built-in admin account, I am looking over the privileges and cannot seem to find where the Outages options went. Under the User tab, there is no "Create Outage" option. Under the System tab, there are no "Outage Options" to configure.
    Maybe it is not displaying as a result of some other setting I've configured, but I'm not certain. After re-reading the docs, I do see a note that "All Outages options apply to the Service Manager only", but I'm not sure what they are referring to. Is this why I don't see those options, since we're not running the fully licensed/capable version of Service Desk?
    Ideas?
    System: Appliance v7.0.2 / Incident Management version.
    Thanks!
    LK

    The missing outage settings seem to be related to the two-technician license we received through our Open Workgroup Suite/ZENworks maintenance, which only allows for the Incident Management portion of the software. I had seen the outage settings in the privileges before I added our two-technician license; once it was added, a number of settings/options/features were removed, of course. I wish the outages could have stayed in the basic version of Service Desk, though; it seems like a feature that should be there even in the Incident Management version.

  • No requisition found; verify your entries in SRM 7.0

    Hi Udo,
    I am using SRM 7.00 at level 005. I want to use an RFx scenario: I create a purchase requisition in ECC via ME51N, then release it via ME54N. After that it is sent via the ERPSourcingRequest web service, over XI, to the SRM system. In the SRM system the XML message is processed and has the status "Transfer to external status" with a green bolt icon. I don't see an error message in tcode /SAPPO/PPO2, so this looks fine to me.
    So I go to the SRM portal website, Purchasing --> Sourcing --> Carry Out Sourcing. There I enter the purchase requisition number I have in ECC into the field "External Requirement" and press "Search", and I get the warning message
    "No requisition found; verify your entries". No purchase requisitions are displayed - nothing is displayed when I enter * or leave everything blank, either.
    What am I doing wrong; what is missing? Where can I perhaps see further errors, if there is no message in /SAPPO/PPO2? Into which tables should the data from the XML message be written?
    Thank you in advance for your help.
    Best regards,
    Udo

    Hi all,
    the problem was that the relation between plant and product was somehow mixed up. We created it anew and now we can process the messages.
    But I have one more question: which monitoring transactions do I have for SRM (ABAP/Portal)? When I was in debug mode I saw error messages like:
    A     /SAPSRM/CH_SOA_MSG     84     Purchasing group for follow-on document missing
    A     BBP_PD     245     Purchasing group for follow-on document missing
    A     BBP_PD     246     Purchasing organization for follow-on document missing
    A     BBP_PD     249     Document type for determined backend system missing
    E     BBP_PC     13     No default value is defined
    E     BBP_PC     22     No tax data maintained
    E     BBP_PD     276     Back-end data could not be read
    E     BBP_PD     358     Select a location that is assigned to the plant chosen
    E     BBP_TAX     1     Tax code could not be determined
    E     BBP_TAX     13     Not possible to calculate tax
    But I only saw those while debugging. These errors can happen at any time and I need to see them; can you please tell me where I can have a look?

  • Issue with corrupted document, need help!

    Dear all,
    In our company we use InDesign 7.5 (CS5.5 package) daily, and when we export the .indd document to .pdf the system crashes.
    On top of this, the original document (.indd) is corrupted and we're not able to open the document.
    We lost a lot of work and we're not able to restore the document.
    When we try to open the corrupted document, this message appears:
    Can somebody help me?
    Thanks, and have a nice day.

    Thank you. Yes, we at Markzware can certainly try to help with your corrupt InDesign documents.
    From what we see when dealing with the odd corrupt InDesign file from time to time, it comes down to three main reasons why these InDesign documents have issues:
    Font(s)
    Image(s)
    Power outage or forced quit
    Fonts can indeed mean a corrupt font itself, but also corrupt, overloaded or otherwise odd font caches. Here is a video on how to clear font caches in InDesign; although it shows a Mac, I think you'll get the idea of how to do that on a PC as well:
    One highly corrupted image can ruin the show for your entire file. Check all images individually until you find the bad one and remove it. Also, more times than I can count, InDesign users drag and drop images from a web browser, while visiting a web page, right into an InDesign layout. That is a big no-no. I'm not saying you are doing that, just covering what can happen in the images category with a bad InDesign file.
    As for the odd power outage, well, that is something maybe Tesla could have helped us with; until we are no longer dependent on AC/DC current, it will remain a sore point. Power is likely not the issue in your case, though.
    Always report issues like these to Adobe, though. They can often help, and if not right away, at least they may gather important details to help stop that type of corruption in the future.
    If all else fails, as mentioned, we at Markzware can try to help fix your files - Bad InDesign or Quark File Recovery Service.
    Friendly Regards,
    David
    Markzware

  • Slow processing when parsing XML files

    Hi
    I have written a utility in Java (using JAXP) which parses an .xml file. This XML file contains references to n other XML files. From there I direct the parser to each of these individual XML files, where it searches for elements with a particular tag name, let's say 'node'. As in,
    <node name="Parent A">
    <entry key="type" value="car"/>....
    </node>
    <node name="Parent B">
    <entry key="type" value="Truck"/>
    </node>
    I collect all the 'node' elements from these n XML files and then finally build a single XML file which contains only the 'node' and the value of its 'name' attribute. As in,
    <node name="Parent A"/>
    <node name="Parent B"/>
    In most cases n is greater than 100, and each of these n XML files runs to 2,000-3,000+ lines.
    NOW the issue: the XML parser takes more than an hour to go through just 10-15 XML files, collect the 'node' elements, and build a new DOM object, which I finally publish using an XML Transformer.
    In effect, this defeats the whole purpose of writing the utility, as no programmer will stick around for an hour to watch it finish.
    Apart from maybe further fine-tuning my logic, which I've almost exhausted, is there any 'faster' way of doing the whole thing?
    Please reply; any comment would be greatly appreciated.
    I am using the JAXP 1.3 specs to parse and build the DOM.

    DOM is slow.
    Do it yourself using a simple SAX parser: for each startElement, check whether it is a "node", and then write your output!
    Xerces is faster than the built-in Crimson SAX parser in Java.
    Parsing with Xerces, a 2 GHz machine manages about 5-6 MB/s of XML files.
    Or use STX with Joost - although it's not THAT much faster:
    http://www.xml.com/pub/a/2003/02/26/stx.html
    http://joost.sourceforge.net/
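    For what it's worth, here is a minimal sketch of that SAX approach (the class name NodeCollector, the merged.xml output name, the <nodes> wrapper element, and the assumption that the name values need no extra XML escaping are all mine, not from the original post):
    import java.io.FileWriter;
    import java.io.IOException;
    import java.io.Writer;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.Attributes;
    import org.xml.sax.SAXException;
    import org.xml.sax.helpers.DefaultHandler;
    // Stream each input file with SAX and write only <node name="..."/>
    // elements to the merged output - no DOM is ever built in memory.
    public class NodeCollector extends DefaultHandler {
        private final Writer out;
        NodeCollector(Writer out) { this.out = out; }
        @Override
        public void startElement(String uri, String localName, String qName,
                                 Attributes atts) throws SAXException {
            if ("node".equals(qName)) {
                try {
                    // assumes the name values contain no characters needing escaping
                    out.write("<node name=\"" + atts.getValue("name") + "\"/>\n");
                } catch (IOException e) {
                    throw new SAXException(e);
                }
            }
        }
        public static void main(String[] args) throws Exception {
            SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
            try (Writer out = new FileWriter("merged.xml")) {
                out.write("<nodes>\n");
                NodeCollector handler = new NodeCollector(out);
                for (String file : args) { // pass the n file names as arguments
                    parser.parse(new java.io.File(file), handler);
                }
                out.write("</nodes>\n");
            }
        }
    }
    Even on files of a few thousand lines this should finish in seconds rather than hours, since nothing is kept in memory beyond the current element.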
    Gil

  • Photo Management for old (import) & new pictures

    Below is a proposal for organising my photo management system.
    I'm a hobbyist photographer with over 30 years of pictures to file. I have read extensively, including this forum, and sought out best practice to develop a plan for achieving an organised, scalable system.
    I would appreciate you sharing your experience by commenting on my suggested plan of action and the questions I raise below.
    My Objective:
    Organise and manage existing legacy photos, and provide a system for future photos that is not tied to a specific system or vendor - for example, where possible, metadata free from vendor-specific formats or databases.
    Background:
    My collection consists of many photos in over 50 folders; these are:
    a) scanned pictures that are now JPEGs,
    b) shot JPEGs,
    c) raw (new and existing).
    I file my pictures on my PC, with an external backup copy.
    I file them as /my pictures/<YYYY>_<MM>_<Location_Event> - for example: /my pictures/2013_04_Yorkshire_Family_Weekend.
    On some of my digital pictures I have included metadata using Google Picasa v3. Most of my scanned pictures have no metadata as yet.
    I own and aim to use Lightroom 4 for post-production and photo filing and management, whilst also continuing to use Google Picasa for viewing & face recognition (at least until I feel fully confident in Lr).
    Suggested plan of attack:
    Stage 1)
    Add "My Picture" folder, including sub folders to Lightroom Library, Standard View.
    Stage 2)
    Continue with /my pictures/<YYYY>_<MM>_<Location_Event> 
    For example: /my pictures/2013_04_Yorkshire_Family_Weekend.
    Stage 3)
    Use Lr to apply the following file-name format to both existing and new photos: <My Initials>_<YYYYMMDD>_<Key_Words Inc_Location_Event_Description>_<sequence xxxx>
    For example: RB_20130420_Peru_Cusco_Holiday_Shots_0001
    NB: For version control of raw files I will use Lr, including "Virtual Copies"; for developed JPEGs I will use an additional <vX> field.
    For example: RB_20130420_Peru_Cusco_Holiday_Shots_0001_v01.jpg
    Stage 4)
    Manually use Lr to add metadata & ratings to all pictures to date.
    Metadata: include "My Name" on all pictures, "Date" by Windows folder, "Keyword" by folder, and "Additional Keyword" for 3* and above.
    Rating: 1 & 2* no action required; 3*+ maybe further editing if time permits.
    Stage 5)
    Run "Save Metadata to file" in Lr. This will run Metadata to file for JPG and to XMP side car for RAW files.
    Stage 6)
    Import new shots to a new folder using the "/my pictures/<year><month number><event>" structure. As they are in a subfolder, they will be included in the Lr library automatically.
    Stage 7)
    Create JPGs in the same folders as the raw files for developed photos.
    Stage 8)
    Back up "My Photos" and Lr dbase on regular basis as per Lr best practice.
    Questions:
    1) Is Lr more powerful than Google Picasa for renaming files & adding metadata?
    2) Once I "Save Metadata to File" in Lr, will I be able to access it in Google Picasa?
    3) Can I automate the importing of new shots into Lr and have it maintain/build upon my existing file structure?
    4) Any recommendations for a keyword schema to include in the metadata?
    5) Does my approach above seem sensible?

    Honestly, if your goal is to have metadata that is not tied to a specific system or vendor, then here's what I would do:
    I would not put any effort at all into organizing your folders (or improving their organization), or into renaming files, as I think your time is better spent on keywords. In my opinion the benefits of keyword organizing are many. In addition, without putting any effort into reorganizing your folders, you can search for your photos by date inside Lightroom using Lightroom's tools, regardless of the folder structure (and you can search this way inside any other photo-management software I have used as well, regardless of the folder structure).
    In Lightroom, assign whatever keywords and metadata are appropriate, and then instruct Lightroom to write the metadata to the files. There are two ways to do this: one is Ctrl-S, and the other is to turn on the option in Preferences that does this automatically (Edit->Catalog Settings->Metadata). Once the metadata has been written by Lightroom, you have metadata that can be read by any photographic software (and even all major operating systems) that I know of. Having keywords in the file name is redundant: it doesn't help you find photos any faster, but it does require extra work, and there is a limit to the number of characters you can have in a path\filename.
    Another point: don't use Lightroom collections or pick flags as part of this "metadata that is system- and vendor-independent", as Lightroom does not write this information to the files.
    You ask for a keyword schema, but without knowing your photos' subject matter, there's no way I could give any advice.

  • A somewhat more complex use of EXECUTE_QUERY with a Where clause...

    Basically, what I need to be able to do is this: when the form receives a certain parameter upon opening, it must immediately run a certain query and populate the data blocks.
    I understand the concept of setting DEFAULT_WHERE in the PRE-QUERY trigger for the block, but I don't think this will work in my situation.
    The reason is that each data block must run two separate queries, and BOTH result sets must be populated into each data block without overwriting each other.
    Not only that, but the query being executed requires me to reference two separate tables (the data comes from one, but it compares results from two separate tables), so I don't think simply modifying the data block's WHERE clause will even make it possible to achieve the results I am looking for.
    Maybe further explanation of my queries will help. Basically, each data block is linked to a table for the currently logged-in user, and there is a production table as well. When ANYTHING is modified, added, or deleted in the current user's table, they are supposed to "publish" the record to the production table. The EXECUTE_QUERY statement will be responsible for running the query that populates ALL the records that have yet to be published to the production table. So basically, it runs a query that uses an ID and compares each individual field for that ID against the production table for any changes. Then it runs another query to find records that are in either the production table or the user table, but not in both, to flag new or deleted records.
    I've been thinking about possible ways to do this, but have had no luck, unfortunately.
    ANY guidance will be greatly appreciated. I understand that my description of the problem may be hard to follow, so if you need further clarification please ask.
    Jason

    It would be helpful to give an example of the sort of data in each table and what you want to show in each block. I probably didn't understand most of what you wrote, but I think the following may be analogous to your situation and requirements (you'll have more columns, of course):
    SQL> CREATE TABLE t_user AS
      2    SELECT
      3      ROWNUM id,
      4      object_name col1,
      5      object_type col2
      6    FROM all_objects
      7    WHERE ROWNUM < 10;
    Table created.
    SQL>
    SQL> DELETE FROM t_user WHERE id = 1;
    1 row deleted.
    SQL>
    SQL> CREATE TABLE t_published AS
      2    SELECT
      3      ROWNUM id,
      4      CASE Mod(ROWNUM,2)
      5        WHEN 0 THEN SubStr(object_name,1,2)
      6        ELSE object_name
      7        END AS col1,
      8      object_type col2
      9    FROM all_objects
    10    WHERE ROWNUM < 9;
    Table created.
    SQL>
    SQL>
    SQL> SELECT * FROM t_user;
            ID COL1                           COL2
             2 I_USER1                        INDEX
             3 CON$                           TABLE
             4 UNDO$                          TABLE
             5 C_COBJ#                        CLUSTER
             6 I_OBJ#                         INDEX
             7 PROXY_ROLE_DATA$               TABLE
             8 I_IND1                         INDEX
             9 I_CDEF2                        INDEX
    8 rows selected.
    SQL> SELECT * FROM t_published;
            ID COL1                           COL2
             1 ICOL$                          TABLE
             2 I_                             INDEX
             3 CON$                           TABLE
             4 UN                             TABLE
             5 C_COBJ#                        CLUSTER
             6 I_                             INDEX
             7 PROXY_ROLE_DATA$               TABLE
             8 I_                             INDEX
    8 rows selected.
    SQL>
    SQL> -- yet to be published to the production table
    SQL> -- includes records previously published and then updated, but not new
    SQL> -- records which have never been published (these are in the other query)
    SQL> SELECT u.id, u.col1, u.col2
      2  FROM t_user u, t_published p
      3  WHERE u.id = p.id
      4  AND (
      5    u.col1 != p.col1
      6    OR u.col2 != p.col2
      7    )
      8  ;
            ID COL1                           COL2
             2 I_USER1                        INDEX
             4 UNDO$                          TABLE
             6 I_OBJ#                         INDEX
             8 I_IND1                         INDEX
    SQL>
    SQL> -- new and deleted records
    SQL> SELECT * FROM(
      2    SELECT id, col1, col2, 'NEW' status FROM t_user
      3    UNION ALL
      4    SELECT id, col1, col2, 'DELETED' status FROM t_published
      5  )
      6  WHERE id NOT IN(
      7    SELECT id FROM t_user
      8    INTERSECT
      9    SELECT id FROM t_published
    10  )
    11  ;
            ID COL1                           COL2                STATUS
             9 I_CDEF2                        INDEX               NEW
             1 ICOL$                          TABLE               DELETED
    SQL>
    Basically, I think the answer is in the query and not the WHERE clause.
    I think you want one of these queries in each block, so set the blocks' queries (are you changing the query based on the parameter passed in, or just executing it? You're not clear on that) and then do:
    go_block('B2');
    execute_query;
    go_block('B1');
    execute_query;

  • Re: Unable to obtain reference to plan in distributed environment

    An interface is just a definition of attributes and methods; it contains no code. The code is in the class that implements the interface, so even if you can pass an object reference as an interface, you had better be sure that the object that will use it has access to the implementation. It works when it is not distributed because all of the code is on the client and is therefore accessible. Once distributed, the partition containing PlanSO does not have the library which contains the code for PlanTester, so it cannot execute code in PlanTester. If you want to do this and not put PlanTester in the same plan as PlanSO, I think you can copy the library which contains the code for PlanTester onto the server partition, and it will not give you a deserialization error, because now it can find the executable code.
    This could also happen with inheritance. For example, you have a class defined in plan A (ClassA), and plan B (which has plan A as a supplier) contains a subclass of ClassA (ClassB). Now in one partition you create a ClassB, then pass it to another partition in a method call with signature Something( p1 : ClassA ). ClassB will fit in ClassA, so you can do this (it will compile). But if the partition on which you are calling Something() does not have access to the code in plan B, then you will get a deserialization error.
    Hope this helps a little.
    Christian Boult ([email protected])
    Programmer - Analyst
    Influatec inc.
    -----Original Message-----
    From: [email protected] <[email protected]>
    To: [email protected] <[email protected]>
    Date: Friday, April 16, 1999 8:25 AM
    Subject: Unable to obtain reference to plan in distributed environment
    Forte Folks,
    I have run across a supplier-plan problem that I think I can explain, but would like confirmation and maybe further explanation of.
    The scenario:
    I have three plans (PlanClasses, PlanSO, and PlanTester).
    PlanClasses has classes that PlanSO and PlanTester use, including an interface called Observer.
    PlanSO has the Service Object and the class that the service object implements. It has PlanClasses as a supplier. PlanSO has a method called subscribe(Observer client).
    PlanTester is a test client. It has PlanClasses as a supplier. It implements the Observer interface, and calls the subscribe method on the PlanSO, passing itself as an Observer to the SO.
    Anybody see the problem yet?
    I run this scenario locally and everything is dandy.
    I run this scenario distributed and I get a deserialization exception. The PlanSO cannot deserialize the test client object that I had passed as an Observer, because it cannot find a reference to the PlanTester plan.
    If I put the test client class in the PlanSO plan and partition the application the same way, it works. The service object needed a reference to the test client class even though it is getting passed as an Observer interface.
    Has anybody run into this before? Can anybody give me further details on why Forte has this problem?
    I get the same exception whether the distributed runtime property on the client is allowed or not allowed. If it is set to not allowed, shouldn't only an OID be sent across, with no serialization needing to take place? If the object is serialized, I guess it does make sense that you would need the details of the implementation of the interface (the class) to deserialize it.
    Note: I used this pattern instead of Forte events because I am going to expose the SO as a CORBA interface. Right now, for the Forte clients that use the SO, I am registering for Forte events, which are posted by the SO, to get around this problem.
    Thanks for any responses,
    Chris Henson
    www.atgs.com

  • External lib classes not found

    Starting with 4.5, Flex has complained about the linking of all my projects with default CSS because they were "merged into code". The problem described sounded exactly like what I've dealt with: skins randomly not being found despite being defined in defaults, or being overridden by the parent. I've switched the merge type to external, but now the classes defined in my lib projects can't be found. Is there something else I need to do to make them work?

    Maybe further clarification will help...
    I have SmallLib, BigLib, and FlexProj. BigLib references SmallLib (via external link type). FlexProj references both (via external link types). When I run FlexProj, it instantly crashes, saying it can't find my main Application (I use a custom Application class, which is in BigLib). What do I need to do to get FlexProj to find the classes that are in BigLib and SmallLib?
    Thanks

  • My apps won't update, can someone help?

    I go to the App Store to update my apps, but it says it is unable to connect, even though it will let me onto everything else. Can someone please help me? I have no idea what to do.

    Try logging out of your account on the iPad by tapping on your ID in Settings > Store, then log back in and see if it works.
    If that doesn't solve it, then what has worked for some people is going into Settings > General > Date & Time, changing the date to a few months (or maybe further) in the future, and then re-trying.

  • MacBook Pro hangs in gray screen after security update

    I just allowed Software Update to download and install the latest Safari update and Security Update 2009-6. During the restart my MacBook just hung on the gray screen with the spinning gear.
    I've tried starting in Target Disk Mode, but the iMac doesn't register the MacBook's HD.
    I started from the install disc (10.4.5) and launched Disk Utility, but it won't read the HD either.
    I tried a safe start-up and resetting PRAM, but it still hangs.
    HELP

    If not for the coincidence of a restart after those software updates, I'd say you need to get a correct version of DiskWarrior and see if it can repair or rebuild the hard disk drive's directory, since that is a likely item to have been corrupted somehow. And corruption of data, including the System, is a serious matter. A near-capacity or over-full hard disk drive can cause odd issues, including corruption, so there may be more going on in the Mac.
    Other third-party disk utilities may be able to help get into the failed computer, and the problem is likely the hard disk drive. Sometimes a software issue in a drive can be related to a hardware failure, so consider a strategy that may include a new drive for the computer and an external enclosure for the drive that is now in the failed computer, to then maybe further your recovery of data if it comes to that.
    Good luck & happy computing!

  • Oracle VM acquire lock issues

    I have been trying to manage an Oracle VM environment (1 master server, 3 VM servers) for several months. The main issue we have is that every so often, when we have to reboot a VM guest, either as the guest is going down or coming up we will see the following error in the master server's /var/log/ovs-agent/ovs_root.log file:
    "2011-06-08 22:28:44" ERROR=> OVSPolicyServer.execute_policy(): error. => errcode=00001, errmsg=CDS accquire lock /etc/ovs-agent/db/cluster.lock timeout. locker process is 14070
    StackTrace:
      File "/opt/ovs-agent-2.3/OVSPolicyServer.py", line 38, in execute_policy
        pool_ha_enable = db_load("cluster", "pool_ha_enable", get=True)
      File "/opt/ovs-agent-2.3/OVSCDS.py", line 159, in db_load
        cds = CDS(db_name)
      File "/opt/ovs-agent-2.3/OVSCDS.py", line 119, in __init__
        raise CDSLockTimeout(ERR_CDS_LOCK_TIMOUT, {
    The only way to get the VM guest back online is to shut down all other VM guests running on the assigned VM server, reboot the VM server, then try again.
    I have tried many different avenues to get a resolution:
    1. Enabled HA on the server pool/VM servers/VM guests. This was a disaster, because every time the aforementioned error occurred the master server would be affected and bring down the entire environment.
    2. Rebuilt the Oracle VM DB (removed all files in /etc/ovs-agent/db/ and restarted/re-added the servers to the server pool).
    3. Disabled HA and rebuilt the DB again.
    Still these errors occur, and this is really causing many customer issues with constant maintenance reboots. Has anyone seen this and discovered a solution?

    Hi,
    is your storage repository on NFS?
    And do you perhaps experience short network outages on the NFS storage before this error occurs?
    You can try with the following workaround:
    Stop all OVS-Agents: /etc/init.d/ovs-agent stop --disable-nowayout
    Delete the locks on the master: rm /OVS/.ovs-agent/db/*.lock
    Start all OVS-Agents: /etc/init.d/ovs-agent start
    Regards
    Sebastian

  • WET54G does not seem to make WRT300N signal better

    Greetings,
    A friend of mine is using a WRT300N, firmware v0.93.3, as his router for both wireless and wired connections. The signal does not reach the other side of his house for whatever reason. The router is less than 25 feet from the other side of the house but the signal does not reach, so he purchased a WET54G. I set it up for him while it was hardwired to one of the computers, then I unplugged it and moved it about 15 feet or so away from the WRT300N and kept it wireless. It does not seem to make the signal any better at the other side of the house. I went into the WET54G's settings and performed the Site Survey, selected the WRT300N's SSID, and everything seemed to connect just fine. Is there anything else to this? What else could I try to make this signal better? I feel that there are probably some obstacles in the way of the signal, like an electrical breaker room and an air conditioning unit, as well as some drywall, but I didn't think that would make a big difference. Anyway, here is some pertinent information about the devices; maybe my settings are just not right. Any help would be greatly appreciated. Also, any recommendations on antennas would be appreciated as well. Thanks in advance.
    WET54G:
    SSID: home
    Network Type: Infrastructure
    Security is enabled and set to WPA Pre-Shared, TKIP, same password as the WRT300N
    Device IP: 192.168.0.109
    Subnet Mask: 255.255.255.0
    Gateway: 192.168.0.10
    Transmission Rate: Auto
    Authentication Type: Open
    Threshold: 2347
    Frag Threshold: 2346
    Cloning Mode: Disabled
    WRT300N:
    SSID: home
    Network Mode: Mixed (which is N, B, & G)
    Radio Band: Wide - 40 MHz Channel
    Wide Channel: 3
    Standard Channel: 5 - 2.432 GHz
    SSID Broadcast: Enabled
    Wireless Security: PSK Personal, TKIP, same pass as WET54G
    Mac Filter is Disabled
    Device IP: 192.168.0.1
    Subnet Mask: 255.255.255.0
    Start IP: 192.168.0.100
    Maximum Number of Users: 10
    Also, when I go to DHCP Reservation, which I am not using, I do not see the WET54G, but the desktop and the four laptops are listed.
    All the settings on the WRT300N under Advanced Wireless Settings are default.
    Any help would be greatly appreciated.

    I am a little confused by your post. Are you using the WET to receive a wireless connection from the WRT300N, or are you trying to use the WET to retransmit the wireless signal and thereby increase your wireless range? The WET is designed only to receive a signal, then pass it along an Ethernet wire to the computer's Ethernet port.
    The WET54G is set to an improper fixed LAN IP address. Fixed (static) LAN IP addresses must be OUTSIDE the DHCP server range. Your DHCP server range is 192.168.0.100 through 192.168.0.109. I would suggest you set your WET to 192.168.0.226 (similar to its default), unless you have another device at that address; at the very least, get it out of your DHCP server range. Because 192.168.0.226 is automatically recognized as a fixed LAN IP address, you do not need to make a "DHCP Reservation" for it. Additionally, the gateway on the WET should be the WRT300N's address, 192.168.0.1.
    I'm not sure why you are getting such a poor wireless range. However, there is no reason to expect the WET54G to have any more range than an ordinary wireless-G card, unless you have put a high-gain antenna on it, like the HGA7S.
    Your WRT300N would be expected to have its best range if the computer that you were connecting to used a Linksys wireless N adapter.
    Also, make sure your WRT300N is properly located. Try to minimize the obstructions between the router and the computer. Make sure the router is placed in an open area, several feet up off the floor. Do not place it behind your computer monitor, or with other obstructions around it. The antennas should be vertical. I am not sure how to position that square antenna - perhaps twisting it to the right or left might help.
    Make sure the WRT300N has a unique SSID.  Do not use "linksys".  Do not use any SSID that your neighbors are using.
    Poor wireless connections are often caused by radio interference from other 2.4 GHz devices. This includes wireless phones, wireless baby monitors, Bluetooth (including Bluetooth game controllers), microwave ovens, wireless mice and keyboards, and your neighbor's wireless network. Even some 5+ GHz phones also use the 2.4 Ghz band. Unplug these devices, and see if that corrects your problem.
    In your router, try a different channel. There are 11 channels in the 2.4 GHz band. Usually channel 1, 6, or 11 works best. Check out your neighbors, and see what channel they are using. Because the channels overlap one another, try to stay at least +5 or -5 channels from your strongest neighbors. For example, if you have a strong neighbor on channel 9, try any channel 1 through 4.
    Also, you might want to try separating your own channels. I see you are using 3 and 5; maybe further apart would help.
    Hope this helps.

  • Communications Suite 5 in Sparse Zones

    My machine has one NIC and one public IP
    I've been able to setup zones on my Solaris 08/07 x86 machine in this manner:
    3 sparse zones are on a private network (192.168.0.x) and have access to the internet via virtual interfaces of the physical NIC.
    From outside the machine, services are available on the machine's public IP; using NAT and IPF, I have a service like SSH forwarded to a zone. For example, ssh -l jdoe -p 2222 public ip (or fqhdn) gets me into a particular zone, based on my ipnat.conf.
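    For reference, a redirect rule of roughly this shape in ipnat.conf is what drives that example (the interface name bge0 and the addresses here are illustrative placeholders, not my exact file):
    # forward TCP port 2222 arriving on the public interface to SSH in one zone
    rdr bge0 0.0.0.0/0 port 2222 -> 192.168.0.10 port 22 tcp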
    I'm not sure I can do anything name-based with the zones, as far as forwarding traffic appropriately based on name, especially with only one IP... or whether this is even a requirement for such an installation.
    My question here is: would it be possible to do an installation of Communications Suite 5 that uses my zones - they would talk to each other on the local network, but be accessed by clients via one IP (and/or name) on the physical NIC?
    Would you recommend certain components on the global zone? Or would a scenario like:
    Dir in zone a
    Mail servers in zone b
    and Web server in zone c
    work?
    My uncertainty is in how the components will talk to each other as:
    Currently, on the inside, the zones are reachable only on their private IPs and actual ports - so once inside the global zone, ssh -l jdoe -p 2222 public ip (or fqhdn) doesn't work; I have to use the private IP and actual port, e.g. ssh -l jdoe -p 22 192.168.0.10 (or the host name as mapped in /etc/hosts). Not sure if there is any fix for this - maybe further ipnat/ipf rules. Is this even a problem? Might it be transparent to the end user?
    Thanks for any insights,
    s7

    I got this working, but on my first go I had a problem. Without the /etc/hosts entries, I was forced by the installer to use the local zone hostname during the WS, AM, DA, and CE installs; the problem is that AM would redirect to this un-DNS-resolvable local zone name - e.g. when calling http://gz.foo.bar.com/amconsole it would redirect to http://silverzone.foo.bar.com/amconsole, which failed outside of the /etc/hosts environment.
    Here's what worked on the second go:
    Each zone's /etc/hosts file has entries for the other zones, and its primary entry - the one naming itself - also carries the GZ name.
    This allows WS/AM/DA/CE to use the one DNS-resolvable hostname I have (gz.foo.bar.com), and avoids AM redirecting to a non-DNS-resolvable zone name.
    For example, here's the silverzone's /etc/hosts file:
    # Internet host table
    # ::1 localhost
    127.0.0.1 localhost
    192.168.0.80 gz.foo.bar.com silverzone.foo.bar.com silverzone loghost
    192.168.0.60 orangezone.foo.bar.com orangezone
    192.168.0.70 purplezone.foo.bar.com purplezone
    During the install parts (except for the DS itself, where I did use the GZ hostname), when the installer wanted the LDAP host or URL, I used the zone name - e.g. purplezone.foo.bar.com:389 - because if I used the gz.foo.bar.com name it wouldn't work, since LDAP is on a different zone.
    This doesn't appear to be a problem, as the LDAP name-based communication seems to happen entirely internally.
    For the CE config: since the messaging and calendar servers are on a different zone than CE/WS, I had to use their zone names when referring to the host/port that ME and Cal. Exp. are listening on - I couldn't use the GZ name. Again, as each zone also thinks it's the GZ, it would otherwise try to find those servers/ports on itself, and those servers are listening on a different zone.
    Basically, I tried to use the GZ name during the install wherever it would let me, and I increased the number of places where it would by having the /etc/hosts entries. When I couldn't use the GZ hostname, I used the appropriate local zone names.
    My sequence was something like this (I used "configure now" option):
    0> install shared components and message queue in GZ
    1> purplezone: install DS and preparation script - start server, run prep script
    2> orangezone: install messaging and calendar servers
    3> silverzone: install WS, DA, CE, and AM (use GZ name for AM hostname)
    4> silverzone: configure DA
    5> orangezone: configure messaging and calendar
    6> silverzone: configure CE
    For calendar, I did have to edit ics.conf and replace the entries for orangezone with gz.foo.bar.com.
    This thread helped me setup the inter-zone networking:
    http://forum.java.sun.com/thread.jspa?messageID=10137177&tstart=0

Maybe you are looking for

  • Loading a movie within a movie

    Hello! I am trying to figure out the proper code for my button, which should unload a movie and then call an external SWF to load within a specific frame of yet another movie clip. Hope that made sense. In other words... I want the new clip to load at

  • Image not being cleared

    Well, I've really managed to find an odd problem this time. OK, so right now I have an applet, and it has a thread which does public void run() { while(true) { try { runner.sleep(100); repaint(); } catch(InterruptedException e) { } Right, anyways th

  • "Open With" command locks up

    If I right-mouse-button click on something like an image and click on "Open With," the function locks things up and forces me to do a relaunch on Finder. This has been going on for quite some time, so I don't know where along the line the problem fir

  • Deauthorization of all computers logged in with my ID

    Dear friends, I unfortunately deleted my Linux install and have now installed WinXP again, and now iTunes is not authorizing my PC. Kindly, can anybody help me in this regard? Thanx

  • HT201363 forgot my answers to my security questions how can i retrieve them

    Forgot my answers to my security questions; how can I retrieve them?