Unexpected NFSv4 permissions behaviour

I used to successfully mount my Arch desktop drives on my Ubuntu laptop, using NFS. Recently I switched my laptop (which is my production machine for work) to Arch as well. Since the desktop is configured as the NFS server, I assumed it was simply a matter of configuring the laptop as an NFS client. An overview of the config (as explained on https://wiki.archlinux.org/index.php/NFSv4 and http://www.brennan.id.au/19-Network_File_System.html):
Server
The /etc/exports config file:
/exports some-ip-address(rw,sync,no_subtree_check,no_root_squash,fsid=0)
/exports/home some-ip-address(rw,sync,no_subtree_check,nohide,all_squash,anonuid=99,anongid=99)
nobody:nobody is 99:99 on both server and client. Further, the client and server share the same UID and GID in /home.
and the real directories are bound onto the exports in /etc/fstab:
/home /exports/home none bind 0 0
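To apply and double-check the exports after editing /etc/exports, something like this on the server should do (a minimal sketch, assuming root):

exportfs -ra   # re-read /etc/exports and apply any changes
exportfs -v    # list the current exports with their effective options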
The /etc/idmapd.conf is identical on the client and the server:
[General]
Verbosity = 0
Pipefs-Directory = /var/lib/nfs/rpc_pipefs
Domain = dtu.dk
[Mapping]
Nobody-User = nobody
Nobody-Group = nobody
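A quick sanity check that server and client actually agree on that domain (a sketch; nfsidmap -d needs a reasonably recent nfs-utils):

nfsidmap -d    # prints the effective NFSv4 idmapping domain; run it on both machines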
On the client the /etc/fstab specifies the mounting options as follows:
some-ip:/ /mnt/server nfs noauto,exec,rw,_netdev,sync 0 0
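To test outside of fstab, a manual mount with the same options should behave identically (a sketch; some-ip is the placeholder used above):

mount -t nfs4 -o rw,sync some-ip:/ /mnt/server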
I can successfully mount the partitions on the client and I have read/write access, but there is one thing I fail to figure out: if I check the permissions of the mounted share with ls -lah on the client, I see:
drwxr-xr-x 12 4294967294 4294967294 4,0K 10 apr 00:04 data1
This indicates the share is owned by user -2:-2 (4294967294 is -2 as an unsigned 32-bit integer, and that is how Nautilus displays the UID:GID of the share). That is not what I expected: I was expecting to see the real owner of the mounted files. This matters especially when rsyncing my laptop (where I want to keep the file permissions) against the desktop configured as the NFS server. I've been trying to fix this for weeks, but I fail to see what I am doing wrong. Changing anonuid,anongid in /etc/exports on the server to the actual owner has no effect. It is as if the id mapping daemon is not doing its job.
Am I missing the obvious here? Maybe the NFS configuration is correct, but some other system is not playing along?

And indeed... rpcbind and nfs-common were not running on the client... stupid.
However, I only got the permissions right when setting the Nobody user on the client to my home user:
[Mapping]
Nobody-User = myhomeusername
Nobody-Group = myhomegroup
As far as I understand it, this just forces any unresolved mappings to be mapped to homeuser/homegroup on the client. I was under the impression that /etc/exports on the server was supposed to make sure the file ownership was preserved on the client. The only other thing I can think of is that although the UID number is the same on both client and server, the user name is not. I suppose both the user name and the numeric UID need to be identical for a successful mapping.
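For anyone hitting the same thing, these are the checks I would run to verify the mapping end to end (a sketch; myhomeusername is a placeholder and the exact service names vary between distros):

id myhomeusername                      # must print the same uid/gid/name on BOTH machines
systemctl status rpcbind nfs-idmapd    # on the client: the idmapping pieces must be running
nfsidmap -c                            # flush the client's idmap cache after editing /etc/idmapd.conf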

Similar Messages

  • Unexpected NTFS permissions behavior

    I am administering a home server running Windows 2008 R2 (accessed by several Windows 8.1 clients) and have run into an NTFS permissions issue I do not understand.
    On this server there is an S drive that contains all the online storage for our network.
    The structure of the S drive looks like this:
    Files
      Public
        VariousPublicFolders
      Private
        Person1
          Person1sStuff
        Person2
          Person2sStuff
    Temp
    Before any security modifications were made, the root of the S drive had the following permissions:
    SYSTEM : Full Control
    Administrators : Full Control
    Users : Read and Execute
    Also, all the above folders on this drive are owned by the Administrators group.
    Then I started changing permissions in order to satisfy my security requirements.
    For example, I disinherited permissions on Temp and allowed "Everyone" Full Control there.
    That works fine.
    What doesn't work fine is the permissions I am trying to set on the personal folders under the Private tree.
    I want to make sure no one else (other than administrators) has access to anyone's personal data  (the data in the Person1, Person2, etc folders).
    So I disinherited permissions on each of those folders and then explicitly set the following:
    SYSTEM : Full Control
    Administrators : Full Control
    PersonX : Full Control
    This, I thought should work beautifully.
    I thought it should give each person full control of what happens in their personal folder, and also allow administrators and the system to get in there.
    But it is giving people in the Administrators group problems.
    I have a user called Custodian which is part of the Administrators group, and I use this user to do administrative tasks.
    As Custodian, when I try to access someone's personal folder in Explorer I am greeted with a message that says "You don't currently have permission to access this folder."
    Whaaaaat?
    I am offered "Click continue to permanently get access to this folder."
    If I say YES then Custodian is explicitly added to the folder ACL, and I can proceed to access the files there.
    But I don't want this.
    Custodian is already an Administrator, and Administrators already have full control of the folder.  Why should I even be prompted, and even worse, have my user unnecessarily added to the ACL?
    This is bad bad bad.
    I think what is going on is that UAC is coming into play.  I read somewhere that Administrators normally do not use their administrative token until they hit something that requires it, then UAC kicks in and asks to elevate them.  This would be
    fine if it happened.  I guess I wouldn't mind having to click OK to enter a folder that requires admin access.  But this is not what is happening.  I am just blanket denied until I explicitly add the Custodian account to the ACL.
    Can someone explain why an admin user is denied access to a folder having full control given to admins?  What is going on here?
    Can someone suggest a way to rig permissions so that my admin user doesn't have to be added explicitly?
    I just want to be able to browse and alter the entire drive with ANY administrative user without being bothered to alter permissions.  I don't even mind being prompted for elevation, although honestly I'd prefer not to have that happen either.

    Thank you, Milos, for taking the time to look at this.
    RE: Making the post shorter, I'm not sure I could.  It's not an easy problem to state, and I tried to do so in as few words as possible, any less and I would be leaving out critical info.
    Here are your questions, answered.
    (2) Yes, I am using a workgroup.
    (3) An example cacls output:
    S:\Files\Private\Scott DATABANK\Scott:(OI)(CI)F
                           BUILTIN\Administrators:(OI)(CI)F
                           NT AUTHORITY\SYSTEM:(OI)(CI)F
    (4) The share permissions are full, yes, but this is immaterial.  I suffer the problem accessing the files locally/directly on the server.  I probably shouldn't have mentioned anything about the share.  It's just that the problem came to light
    while I WAS accessing the files through the share, but then I tried locally and still had the problem.
    I am unsure of what you mean in (5) (6) (7).  Could you elaborate?

  • Unexpected Campus Manager Behaviour

    Hi all,
    I recently installed LMS 3.2 (and the latest updates for CM, RME and HUM) and am experiencing some really strange behaviour in Campus Manager.
    After running a data acquisition job, the campus manager portal lists 41 devices (as expected), and I see a number of Best Practice Deviations, etc.
    When I click on the Best Practice Deviations, it generates the details in the report as expected, however, when I click on the number of devices (the hyperlink in the main CM portal page) I get the message below:
    No devices managed in Campus Manager. Run Data Collection and launch  again.
    I imported the list of devices from our ACS database into DCR successfully and verified all the connectivity through RME (I'm getting good config backups, so I know the DCR objects and credentials are correct), and the deviations and discrepancies reports work properly. However, I'm a little confused as to what I missed in terms of telling CM WHICH devices to manage.
    In the device selection, I have it set to Auto Mode, and have selected "All Devices".
    However when I go to the "Include Devices" section, all of the groups are empty. Even if I click on "All Devices" the number of selected devices shown is always zero. (ie: it's like CM doesn't see ANY devices in the DCR)
    But really strange is when I go to the "Excluded Devices" and click on "All Devices", it shows the 41 devices in the selected list??
    I thought I'd pose the question here before getting TAC involved, thanks in advance for any insight you can provide!

    Ok Joe, a fresh install on a Win2K3 server netted me the exact same result.
    ANI logs are attached.
    Here's the steps I followed for the install:
    1. Install OS and apply all Windows updates.
    2. Install LMS and reboot as required.
    3. Install Campus Manager 5.2.1 Update
    4. Install RME 4.3.1 Update
    5. Install HUM 1.2.1 Update
    6. Install CSCta13528-1.0 patch for CiscoView
    7. Reboot server
    8. Log into LMS Portal as admin, run Server Setup through Workflows
        8.1 Create new system identity (to be used for ACS integration later)
        8.2 Setup Device Credential Sets / Policies
        8.3 Import devices from Remote NMS (ACS)
        8.4 Allocate devices to CM, RME, DFM, IPM
        8.5 Skip ACS Mode Change (planned on doing this later)
        8.6 Finish Server Setup
    9. Access DCR from Common Services and delete 2 device entries (there are 2 devices in the ACS DB that have AAA entries for both RADIUS and TACACS, "SEC-AP-G_RADIUS" / "SEC-AP-G_TACACS", so I delete the RADIUS objects from DCR)
    10. Rename 2 devices in DCR (delete the "_TACACS" from the display name)
    11. Access Campus Manager, try to run Data Collection, task starts/stops immediately (maybe a second or two)
    12. Click on the "5 Devices" in the result column and get the message that there are no devices managed by Campus Manager.
    Logs are attached, please feel free to comment if my steps above are incorrect.

  • Unexpected 3 tier behaviour on 2 tier systems

    Hi Everybody,
       An issue has arisen with systems using lib_dbsl at patch levels 7.00 239 and 7.01 77. The discrepancy causes lib_dbsl to assess 2 tier systems, as well as application instances running on the IBM i database server, as 3 tier and to start the primary database connections via XDA. The workaround is to specify the profile parameter dbs/db4/connect_type = L in the instance profile. The problem will be solved, in each case, with the next lib_dbsl patch.
    Regards,
        Sally Power
        For SAP in IBM i Development Support

    Move Central Admin to one of the WFEs using PSCONFIG.exe. Then use Services on Server to start the existing service applications on the WFEs and stop them on the App servers. The one exception to this is Search, where you will have to use PowerShell to redistribute the components to the WFEs. Once everything is running on the WFEs, you can then use Servers in the Farm in Central Admin to remove the App servers from the farm.
    Paul Stork SharePoint Server MVP
    Principal Architect: Blue Chip Consulting Group
    Blog: http://dontpapanic.com/blog
    Twitter: Follow @pstork
    Please remember to mark your question as "answered" if this solves your problem.

  • Flash downloading blocks page nav

    I know my actionscript better than I know the browser
    environment, although I'm getting to grips with javascript because
    of the similarity. But some things like this are uncharted
    territory...
    With a couple of flv players downloading some 3Mb and 7Mb flvs after the web page has rendered, the page nav in the containing page is blocked. A click on a nav button (not flash) in the containing page will result in a new page load, but it waits until what I assume is one of the two simultaneous http connection slots is freed up. This can take a while depending on the user's connection speed and is unusual/unexpected/sub-standard behaviour for a web page. Having the two videos loading simultaneously is a requirement rather than sequencing the loads.
    I know that browsers limit the number of simultaneous connections to a single domain for many reasons and that standards come into play. The flvs here are hosted at the same domain.
    I've built optional delay flashvars for each player to permit staggering them somewhat, but that's a guess at best, won't solve the problem for everyone, and could disadvantage some with very fast connections. I should mention that the target user is a business client, so broadband connections are assumed, and the requirement is that the videos are ready to play asap.
    I understand that placing the media in another domain is an
    option because the browser connections limit applies per domain. I
    also know you can change the settings for browsers to increase the
    connections per domain, but that's not helpful here... and that
    some browsers are less restrictive.
    What I would like to know is:
    Is there any simple way in flash to allow the regular html page nav to work as normal when flash has a stranglehold on the two simultaneous connections? Am I missing something obvious? TIA,
    GWD

    @clbeech: thanks for your reply.
    I've thought of things like using javascript via
    ExternalInterface to tell the players to change the contentPaths of
    the flvplaybacks to nonexistent files (I know that works for
    NetStream, don't know if it works for FLVPlayback). But I was only
    asked to work on the flash content for the page and if possible
    preferred to avoid making a requirement to add in extra js for the
    website developer and have the nav behave differently for that
    page. If that's the only way apart from using a separate domain for
    the media then I guess that's what I'll have to do.
    What I don't understand is why a browser page request doesn't
    over-ride the grip that flash has on the http connections... and
    perhaps there's something simple that I'm missing in either being
    able to do this or understanding the fundamentals of why it doesn't
    make sense to permit it in terms of how browsers work.
    Anyhow, so far I can identify four solutions:
    - hosting the content on another domain (not desirable)
    - configurable loading delays for starting the downloads (already in place, but not really a solution)
    - forcing single load queueing (not desirable)
    - javascript/ExternalInterface to tell the flash players to "stop" downloads to free up connections
    Thanks again. GWD

  • Help with rescuing data from faulty disk

    Sorry for the long wall of text; I wanted to explain everything in as much detail as possible without messing up the chronological order (TL;DR down below!)
    Ok, here's the thing. I have two hard drives: an ssd [sda] (for the OS [arch linux, ubuntu and win7], munchkin 240gb) and a sata 3 one ([sdc] wd, 1 tb) that I use for everything else. A couple of weeks ago I noticed progressively worse, unexpected and random behaviour, particularly with windows: random freezes, no bsod, etc... I finally found that the second hard drive was the culprit, so I decided to stop using it and buy a new disk (while sending the faulty one in for RMA, since it's barely one year old and I don't do anything out of the ordinary); I got a seagate with 1 tb.
    I honestly didn't want to mess too much with the data and wanted to save as much as possible without wasting time, so I decided to try to clone the whole disk with dd according to the wiki (Disk Cloning). I imagined that the "bad sectors" were on the ntfs partition, so I went with that. Clearly I could get corrupt files in linux too, but my guess was that the most important data was safe, because I had been using it for the last couple of days without problems. The process ended with a lot of I/O errors, but in the end it showed the same number of records `in` and `out`. I regenerated the UUID for the new cloned disk and each partition, modified the fstab and restarted (fingers crossed).
    I could log into arch linux without problems, used it for a while, restarted a second time (that windows habit I still find hard to overcome) and everything was okay (yes, I made sure I was actually on the new disk). Next was windows' turn, which to my surprise worked fine too; I even played a while. After patting myself on the back, I rebooted the machine thinking about how much work I would like to get done today, and an error appeared on the login screen, just a ms before I pressed enter; I think I read it was referring to another partition of sda. The system got stuck between the login screen and loading x with a black screen, nothing else. After waiting a while and losing hope, I pushed the RESET button.
    Now it was stuck during the boot up process, after "killing rf radio" or something (that message had appeared there ever since I built the pc), so it was obviously stuck somewhere else... It did report that the /var partition was clean, same with the root partition. Next I tried the "recovery" kernels, and every other kernel I still had in grub, but the problem persisted. Tried with ubuntu: had to change fstab again, rebooted, tried to enter, got stuck on the loading screen with the big purple background. I couldn't get anything past that, nor get a decent log.
    I don't know why (or maybe because I was desperate), but I entered the uefi/bios settings and found that the time was localtime instead of utc... weird, I thought, and I blamed windows (though I had had the registry hack working for a while) and imagined that some update had removed the blessed fix. Since I remembered once having issues with partitions not being correctly mounted because of time issues, I thought it was a problem with the time... nope, nothing changed.
    I tried again with the recovery kernel of the ubuntu install; I could drop into the root shell, and tried to fix the clock time after remounting. The same with mounting the /var partition (ubuntu doesn't use that one): it threw a "bad superblock" error, but fsck ran saying the filesystem was clean, and it mounted normally afterwards. Somewhere in between I mounted the home partition, no errors, and gave it a quick navigation. I tried to install ntpd to follow the "troubleshoot" section of the Time arch wiki page, but HA, no internet.
    Rebooted again, and the menu from "ubuntu recovery" allowed me to enable networking. Quite nice, I thought, and pressed enter. It politely informed me that I needed to mount the system (root partition) read/write instead of read-only. I happily said yes (I had already remounted the ubuntu root as rw, and it worked ok), but it stopped at the home partition, taking about 10 minutes in "phase 2", then suddenly started throwing a lot of errors about missing inodes, files pointing nowhere, etc. It put a lot of files into the lost+found directory. Oh, there it was... I thought, let's go into Arch Linux then...
    Now the system booted up quite fast, and I was welcomed by the black screen asking coldly for my login information. I wrote the credentials, pressed enter, aaaaand PAH: "/home/pablo" change directory failed: No such file or directory. HA, funny, I thought. lsblk: the partition was mounted. ls -l /home/... WHAT? The folder wasn't there.
    I looked in the lost+found folder inside the home directory, and it had a lot of directories and files called DXXXXXX.RCN and IXXXXXX.RCN, plus some symbolic links. The directories had files inside, the IXXXXX entries are actually files, etc.
    Right now I'm lost... I'm not scared about the data, because it's still living happily (?) on the bad disk, but I still need to recover it so I can filter it later, before writing zeros to the faulty disk and sending it in for a replacement. Is there any chance to recover the data? (I'd rather not go through around 100 files/directories with cryptic names.) Or maybe use another method to "recover" the data from the previous disk, using cp or rsync? Any good experience or advice on this?
    PS.- Sorry for the epic story, I hope I don't scare readers or other fellow members.
    TL; DR
    + I have 2 disks (ssd [OS {win7, arch and ubuntu}] and sata3 [data])
    + The sata3 disk started to fail on windows (the users folder is on the sata3); arch/ubuntu never had a problem
    + I bought a new one, and I'm trying to send the old one in for RMA
    + Cloned the disk according to the wiki and regenerated the UUIDs
    + Changed fstab to use the new cloned disk
    + Worked fine for a couple of reboots
    + Used windows, played, everything fine
    + Entered arch; now I couldn't, it got stuck without showing any errors
        + Tried every kernel and recovery image without success
    + With Ubuntu I could drop into a root shell
        + Mounted home partition, no problems
        + Tried to activate networking; it started to run fsck and threw a lot of errors (I remember reading something about an "offset" not being corrected) and moved files to the lost+found directory
    + Arch Linux started, but I couldn't log in because /home/pablo doesn't exist anymore and it seems everything now lives in the lost+found directory with weird names like DXXXXXX and IXXXXXXX
    Aftermath: I have the original data, safe (?) on the bad disk (like I said, I never had any problem with my home partition). I really don't want to be nitpicky about what I can save; I'll be happy to save as much as I can, and I don't mind losing corrupt files (that's why I thought cloning and then filling with zeros would be enough). I just want to ditch the bad disk and send it to wd for a replacement.
    End of TL; DR
    2014/07/29: Added a TL; DR section
    Last edited by pablox (2014-07-29 13:55:46)
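    For the cp/rsync route mentioned above, a minimal sketch (device and paths are illustrative; the old disk is mounted read-only so nothing on it gets modified):

    mkdir -p /mnt/oldhome
    mount -o ro /dev/sdc3 /mnt/oldhome   # the old disk's home partition, read-only
    rsync -aHAX /mnt/oldhome/ /home/     # -a keeps permissions/ownership; rsync reports unreadable files and carries on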

    I can see three possibilities:
    1. The new HDD is broken as well.
    2. The Linux partition contained a little garbage from old HDD's errors and that confused Linux and caused it to corrupt the partition further by writing more garbage to it.
    3. The garbage from old HDD's errors contained on the Windows partition confused Windows and caused it to corrupt the Linux partition by overwriting it with garbage.
    1. Rather unlikely if the HDD is new, but still worth checking. Install smartmontools and run smartctl -a /dev/new-hdd. On the "SMART attributes" list, check if any of "Reallocated_Sector_Ct", "Reallocated_Event_Count", "Current_Pending_Sector", "Offline_Uncorrectable" have RAW_VALUE other than zero or if the "SMART Error Log" contains any entries. If the attributes are all zero and the error log is empty, everything is fine.
    2. Make a new image of the old HDD. Make sure that the Windows and Linux partitions don't overlap each other due to an HDD read error. Run fsck on the Linux partition before mounting it to ensure that it doesn't contain errors which could lead to further errors. If it doesn't, it should be safe to mount and use it.
    3. You said that the Windows partition certainly contains errors. Don't run Windows and don't mount this partition in Linux without running some NTFS checker (Windows tools on some other Windows system or ntfsck on Linux) and fixing all errors contained there. Alternatively, you can reinstall Windows on the new HDD and manually copy all the files you care about that are still readable from the old HDD. BTW, don't modify anything on the old HDD, as this would likely corrupt its other contents in the same way that modifying its copy corrupted other parts of its copy.
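    If a plain dd copy keeps dying on I/O errors, GNU ddrescue is a common alternative for the imaging step, since it records bad areas in a map file and can retry them (a sketch; device names are illustrative and ddrescue must be installed):

    ddrescue -f -n /dev/sdX /dev/sdY rescue.map    # first pass: copy everything readable, skip bad areas
    ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map   # second pass: retry the recorded bad areas up to 3 times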
    Last edited by mich41 (2014-07-30 22:30:11)

  • OS 10.7.5 Safari will not startup

    using OS 10.7.5 on a 2008 MacBook Pro. Recently did a reinstall of the OS software. Software Update tells me there are no more updates.
    Trying to start Safari, but it will not run. I get a message telling me that Safari quit unexpectedly. Permissions repaired.
    Thank you for your help

    Launch the Console application in any of the following ways:
    ☞ Enter the first few letters of its name into a Spotlight search. Select it in the results (it should be at the top.)
    ☞ In the Finder, select Go ▹ Utilities from the menu bar, or press the key combination shift-command-U. The application is in the folder that opens.
    ☞ Open LaunchPad. Click Utilities, then Console in the icon grid.
    Step 1
    Make sure the title of the Console window is All Messages. If it isn't, select All Messages from the SYSTEM LOG QUERIES menu on the left.
    Enter the name of the crashed application or process in the Filter text field. Select the messages from the time of the last crash, if any. Copy them (command-C) to the Clipboard. Paste (command-V) into a reply to this message. 
    When posting a log extract, be selective. In most cases, a few dozen lines are more than enough.
    Please do not indiscriminately dump thousands of lines from the log into this discussion.
    Important: Some private information, such as your name, may appear in the log. Anonymize before posting.
    Step 2
    Still in the Console window, look under User Diagnostic Reports for crash reports related to the process. The report name starts with the name of the crashed process, and ends with ".crash". Select the most recent report and post the entire contents — again, the text, not a screenshot. In the interest of privacy, I suggest that, before posting, you edit out the “Anonymous UUID,” a long string of letters, numbers, and dashes in the header of the report, if it’s present (it may not be.) Please don’t post shutdownStall, spin, or hang logs — they're very long and not helpful.

  • Just passed 1z0-242

    hi friends,
    I have passed Test [1Z0-242|http://www.examcram.me/go.asp?e=1801&ac=8] (PeopleSoft Application Developer II: Application Engine and Integration). I have never seen an exciting product like ExamCram! You people helped me a lot with my exam, and then lastly ExamCram did.
    Regards,
    Joe Cannata

    Yeah, for some reason I stopped my participation there (i.e. unexpected formatting-code behaviour by the forum software). And I'm reluctant to participate in the Database General forum too (because of the very "friendly" etiquette, which I don't like).
    Anyway, it would be nice to see you again; I've thought about that, but it's hard to get away for a while.
    ...to be continued offline... you know my email address...
    Cheers,
    Nicolas.

  • Basic ADODB Problem

    So, I want to implement a simple database project using VB 6 and Oracle 10g. A friend of mine who has done a similar project sent me some samples of his code, because I am new to DB programming. I had already implemented a simple working version. Here is the comparison:
    rs is the ADODB.recordset object
    and dbipl is the ADODB.connection object
    team is the tablename
    name is a column in the table
    His code:
    dbipl.Open "dsn=new;user id=system;password=pass;"
    rs.CursorType = adOpenStatic
    rs.CursorLocation = adUseClient
    rs.LockType = adLockOptimistic
    rs.Open "select * from team", dbipl, , , adCmdText
    Do While Not rs.EOF = True
    list.AddItem rs.Fields("name")
    rs.MoveNext
    Loop
    rs.Close
    dbipl.Close
    My Code:
    dbipl.Open "dsn=new;user id=system;password=pass;"
    Set rs = dbipl.Execute("select * from team")
    Do While Not rs.EOF = True
    list.AddItem rs("name")
    rs.MoveNext
    loop
    rs.close
    dbipl.Close
    I am not using the Fields keyword anywhere in my code. Also, he hasn't used the INSERT or UPDATE statements in his entire code because he is using something like this to update or insert records:
    rs.AddNew
    rs.Fields(0) = Text1.Text
    rs.Fields(1) = Text2.Text
    rs.Update
    I thought we had to use SQL statements like this:
    Set rs = dbipl.Execute("insert into table values (......)")
    I am guessing his advantage is that he doesn't need to use SQL statements for inserting records or updating his tables. I am completely lost and I don't know which method to use. Please clarify the difference in the two methods. Thank you for your time.

    Pratham wrote:
    Also, he hasn't used the INSERT or UPDATE statements in his entire code because he is using something like this to update or insert records:
    rs.AddNew
    rs.Fields(0) = Text1.Text
    rs.Fields(1) = Text2.Text
    rs.Update
    I thought we had to use SQL statements like this:
    Set rs = dbipl.Execute("insert into table values (......)")
    The ADODB class generates that SQL itself if you do not explicitly provide a SQL statement. In fact, from long past experience doing Windows client development, drivers like ADODB can be quite noisy and generate all kinds of SQL statements for "managing" the database client-server session - totally unbeknown to the developer whose code the ADODB driver is executing. This can cause anything from a performance knock to unexpected run-time behaviour.
    Back then I always used the "passthru" option - telling the driver to bug out and only do the SQLs I explicitly coded and nothing else.
    No idea how the "modern" driver behaves... but I would think that not much has changed.
    If you do have to use SQL from your client code, then rather use explicit SQL. Code it yourself. Use bind variables (critical!). Use bulk binding. Do not rely on the driver to generate the appropriate and optimised SQL for you. It is seldom able to do it as well as hand crafted SQL.

  • My Safari quits unexpectedly. I recently ran "repair permissions" in Disk Utility and I think it started after that.

    My Safari quits unexpectedly. I ran "repair permissions" in Disk Utility and I think it started happening after that.

    Safari: Unsupported third-party add-ons may cause Safari to unexpectedly quit or have performance issues

  • Unexpected behaviour upon pressing the 'Enter' key on dialog screen

    Hi.
    I have two dialog screens that exhibit unexpected behaviour when I press the 'Enter' key.
    Screen 1: When I press the 'Enter' key, the focus shifts to another input field and the content of the previous input field is cleared. The thing is, I do not have any code in PAI for 'Enter'. Why is this happening?
    Screen 2: On load, some of the fields on this screen are disabled. However, when I press the 'Enter' key, all the disabled fields become enabled. Again, I do not have any code that handles the 'Enter' key in PAI. Why is this happening?
    Any help is appreciated.

    Hi Atish.
    Yes, I have used CHAIN... END CHAIN.
    I thought that CHAIN... END CHAIN allows my input fields to be re-activated after an error message is displayed.
    How would CHAIN... END CHAIN cause the unexpected behaviour?
    Thanks.

  • SAP Personas: An unexpected behaviour has been detected and you have been disconnected – please log in again.

    Hello everyone,
    We are installing Personas and facing several challenges.
    Personas 2 SP02 was installed according to instructions of Note '1848339 - Installation and Upgrade note for Personas' and configured according to the Config Guide v1.3 (rel. May 2014). The referenced notes were also applied as well as the 'How to config - FAQ' blog by Sushant.
    We are able to log on and execute transactions and perform activities successfully (e.g. SE80, SPRO, KB15, etc.).
    When trying to copy a screen the following error appears: 'An unexpected behaviour has been detected and you have been disconnected - please log in again.'
    Thereafter one can simply continue executing transactions without having to log in again.
    Please see the download of the error attached as per blog: SAP Screen Personas Support Tips – client side logging
    The HAR is unfortunately too large to attach. Are there any alternatives?
    Thank you in advance for your assistance!
    Kind regards,
    Daniel
    Message was edited by: Daniel K

    Hi,
    I have never worked on SAP Personas, but you can try the things below:
    1) Try a different user or J2ee_admin, since it might be a same-user multiple-session case
    2) Try a different browser, since plugins can behave unexpectedly
    3) Make an entry in the hosts file as well
    4) Check the dev_icm logs
    5) Check the ABAP side for dumps in ST22
    Warm Regards,
    Sumit

  • Unexpected behaviour of Composer with adf-config

    This is a part of my adf-config.xml file:
    <cust:customizableComponentsSecurity xmlns="http://xmlns.oracle.com/adf/faces/customizable/config">
    <cust:enableSecurity value="true"/>
    <cust:actionsCategory>
    <cust:actionCategory name="personalizeActionsCategory" value="#{securityContext.userName eq 'weblogic' ? 'true' : 'false'}"/>
    <cust:actionCategory name="customizeActionsCategory" value="#{securityContext.userName eq 'weblogic' ? 'true' : 'false'}"/>
    </cust:actionsCategory>
    </cust:customizableComponentsSecurity>
    The section above sets a simple EL expression to provide security restrictions for users in Composer. When I deploy the application and log in for the first time, everything works fine. But when I log in with a user other than 'weblogic', unexpected behaviour appears: after that, even the 'weblogic' user cannot access the personalization and customization options anymore.
    So, the question is: is this really unexpected behaviour, a bug, or have I missed something? Here is the pretty clear and simple manual that I followed to try to make it work: http://docs.oracle.com/cd/E29597_01/webcenter.1111/e25595/jpsdg_page_editor_security.htm#autoId18
    Edited by: Igor_Petrov on 22.02.2013 5:56
    Edited by: Igor_Petrov on 22.02.2013 5:57

    More information: I tried rebuild again. The Messages folder now has 20418 entries and takes 7.54 Gb of disk space.
    I searched using Spotlight for a unique term which appears in one of the messages and found 8 instances of it. All are the same message, but are separate .emlx files.
    This is getting crazy ...

  • SimpleDateFormat unexpected behaviour

    Hi everybody,
    I'm facing an issue which is mainly due to my incomprehension of the inner mechanisms of date parsing in Java. I would like to parse a date from an input string with a specific pattern, but some inputs match this pattern when I expect them not to. Let me explain it through an example:
    import java.text.ParseException;
    import java.text.SimpleDateFormat;

    public class DateIssue {
         public static void main(String[] args) {
              SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy");
              sdf.setLenient(false);
              try {
                   System.out.println(sdf.parse("10/02/20001"));
              } catch (ParseException pe) {
                   pe.printStackTrace();
              }
         }
    }
    Here I'm expecting a 4 digit year, but when I give "20001" as the year, SimpleDateFormat parses it just fine, and no exception is thrown. I checked the API, where I found:
    Year:  If the formatter's Calendar is the Gregorian calendar, the following rules are applied.
    * For formatting, if the number of pattern letters is 2, the year is truncated to 2 digits; otherwise it is interpreted as a number.
    * For parsing, if the number of pattern letters is more than 2, the year is interpreted literally, regardless of the number of digits. So using the pattern "MM/dd/yyyy", "01/11/12" parses to Jan 11, 12 A.D.
    * For parsing with the abbreviated year pattern ("y" or "yy"), SimpleDateFormat must interpret the abbreviated year relative to some century. It does this by adjusting dates to be within 80 years before and 20 years after the time the SimpleDateFormat instance is created. For example, using a pattern of "MM/dd/yy" and a SimpleDateFormat instance created on Jan 1, 1997, the string "01/11/12" would be interpreted as Jan 11, 2012 while the string "05/04/64" would be interpreted as May 4, 1964. During parsing, only strings consisting of exactly two digits, as defined by Character.isDigit(char), will be parsed into the default century. Any other numeric string, such as a one digit string, a three or more digit string, or a two digit string that isn't all digits (for example, "-1"), is interpreted literally. So "01/02/3" or "01/02/003" are parsed, using the same pattern, as Jan 2, 3 AD. Likewise, "01/02/-3" is parsed as Jan 2, 4 BC.
    Otherwise, calendar system specific forms are applied. For both formatting and parsing, if the number of pattern letters is 4 or more, a calendar specific long form is used. Otherwise, a calendar specific short or abbreviated form is used.
    I suspect the following excerpt:
    For parsing, if the number of pattern letters is more than 2, the year is interpreted literally,
    to be responsible for this behaviour, but I thought non-lenient parsing would avoid such a "mistake" and throw an exception because of the wrong number of digits. What is the best way to check that the year is exactly four digits long? Do I have to use String.matches() or a Pattern object just for this?
    Thanks a lot
    Edited by: calvino_ind on Aug 30, 2009 1:51 PM

    See this recent, and related thread: [Unexpected SimpleDateFormat behavior|http://forums.sun.com/thread.jspa?threadID=5404776&messageID=10802339]
    The behavior is unexpected intuitively but seems to follow the API contract. The number of digits apparently is only used to break up consecutive fields.

  • Unexpected ACL files everywhere after permissions verify.

    After I re-installed OS X 10.5.1 anew on a recently zeroed-out HD, I ran the Verify Disk Permissions utility. Then I ran Repair. This is what I ended up with:
    Warning: SUID file "System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/MacOS/ARDAgent" has been modified and will not be repaired.
    ACL found but not expected on "Applications/iTunes.app".
    ACL found but not expected on "Applications/iSync.app".
    ACL found but not expected on "Applications/Photo Booth.app".
    ACL found but not expected on "Applications/Preview.app".
    ACL found but not expected on "private".
    ACL found but not expected on "System".
    ACL found but not expected on "sbin".
    ACL found but not expected on "Applications/Mail.app".
    Does anyone know what's going on here, and how to fix this?
    Thanks.

    I am having the same problems on 10.5.1 when attempting to repair the permissions. I believe the ACLs (Access Control Lists) are related to multiple users and the Parental Controls that were added under 10.5. The need for a repair-permissions utility has always seemed a bit unorthodox to me. Apple needs to straighten this whole problem out with the permissions. In addition, when I am logged in as a "parental controlled user", I continually receive a "Managed Client" unexpectedly quit error, where the program fails and asks me to report it to Apple. Makes me wonder if the parental controls are actually working all the time.
    I also receive the following for multiple files during permissions repair:
    Warning: SUID file "usr/libexec/load_hdi" has been modified and will not be repaired.
    There is definitely a problem here.
