Dev 6.0: Is this fun?

Hi!
There are no references to the properties
For Page:
Width
Height
For Report:
Report Width
Report Height
But these were present in Developer 2000.
How do I change them?

Some properties have been relocated to the Main, Header, and
Trailer sections of the Layout Model.
Andrei Kvasyuk (guest) wrote:
: Hi!
: There are no references to the properties
: For Page:
: Width
: Height
: For Report:
: Report Width
: Report Height
: But these were present in Developer 2000.
: How do I change them?

Similar Messages

  • SAPGUI_PROGRESS_INDICATOR: what is the use of this function module when sending data

    Dear all,
    What is the use of the SAPGUI_PROGRESS_INDICATOR function module when sending data to an FTP server?

    When I try to send the data in an internal table of character type (declared below as iresult) to the FTP server, the program raises DATA_ERROR = 3 when the 'FTP_R3_TO_SERVER' function module is executed, and the file is not created on the FTP server. Please help me, it's urgent.
    The FTP_CONNECT and FTP_COMMAND function modules execute properly, returning handle 1 with sy-subrc = 0.
    When 'FTP_R3_TO_SERVER' is executed, it returns SY-SUBRC = 3 (DATA_ERROR), i.e. it fails to write the internal table data to the FTP server.
    This is the code I used:
    DATA: BEGIN OF iresult OCCURS 5,
            rec(450),
          END OF iresult.
    DATA: dest     LIKE rfcdes-rfcdest VALUE 'SAPFTP',
          compress TYPE c VALUE 'N',
          host(64) TYPE c.
    DATA: hdl TYPE i.
    DATA: BEGIN OF result OCCURS 0,
            line(100) TYPE c,
          END OF result.
    DATA: key         TYPE i VALUE 26101957,
          dstlen      TYPE i,
          blob_length TYPE i.
    * Declarations missing from the original post (used below);
    * p_host, p_user, p_password are selection-screen parameters (not shown).
    DATA: lines   TYPE i,
          width   TYPE i VALUE 450,   " must match the length of iresult-rec
          ftppath TYPE string,
          ftpfile TYPE string,
          delfile TYPE string.

    host = p_host.
    DESCRIBE FIELD p_password LENGTH dstlen IN CHARACTER MODE.
    CALL 'AB_RFC_X_SCRAMBLE_STRING'
      ID 'SOURCE'      FIELD p_password
      ID 'KEY'         FIELD key
      ID 'SCR'         FIELD 'X'
      ID 'DESTINATION' FIELD p_password
      ID 'DSTLEN'      FIELD dstlen.
    CALL FUNCTION 'FTP_CONNECT'
      EXPORTING
        user            = p_user
        password        = p_password
        host            = host
        rfc_destination = dest
      IMPORTING
        handle          = hdl
      EXCEPTIONS
        not_connected   = 1
        OTHERS          = 2.
    IF sy-subrc = 0.
      CONCATENATE 'cd' ftppath INTO ftppath SEPARATED BY space.
      CALL FUNCTION 'FTP_COMMAND'
        EXPORTING
          handle        = hdl
          command       = ftppath
        TABLES
          data          = result
        EXCEPTIONS
          command_error = 1
          tcpip_error   = 2.
      IF sy-subrc = 0.
        CLEAR result.
        REFRESH result.
        CALL FUNCTION 'FTP_COMMAND'
          EXPORTING
            handle        = hdl
            command       = 'ascii'
          TABLES
            data          = result
          EXCEPTIONS
            command_error = 1
            tcpip_error   = 2.
        IF sy-subrc = 0.
          DESCRIBE TABLE iresult LINES lines.
          blob_length = lines * width.
          CLEAR lines.
    * Delete the existing file
          CONCATENATE 'del' ftpfile INTO delfile SEPARATED BY space.
          CALL FUNCTION 'FTP_COMMAND'
            EXPORTING
              handle        = hdl
              command       = delfile
            TABLES
              data          = result
            EXCEPTIONS
              command_error = 1
              tcpip_error   = 2.
    * End of deleting the existing file
          CALL FUNCTION 'FTP_R3_TO_SERVER'
            EXPORTING
              handle        = hdl
              fname         = ftpfile
              blob_length   = blob_length
            TABLES
              blob          = iresult
            EXCEPTIONS
              tcpip_error   = 1
              command_error = 2
              data_error    = 3
              OTHERS        = 4.
          IF sy-subrc <> 0.
            WRITE 'Error in writing file to ftp'.
          ELSE.
            WRITE 'File downloaded on the ftp server successfully'.
          ENDIF.
        ENDIF.
      ELSE.
        WRITE: 'Path on ftp not found : ', ftppath.
      ENDIF.
      CALL FUNCTION 'FTP_DISCONNECT'
        EXPORTING
          handle = hdl.
      CALL FUNCTION 'RFC_CONNECTION_CLOSE'
        EXPORTING
          destination = 'SAPFTP'
        EXCEPTIONS
          OTHERS      = 1.
    ELSE.
      WRITE 'Could not connect to ftp'.
    ENDIF.
    ENDFORM.                    " FTPFINANCEACCESS_DOWNLOAD

    AT SELECTION-SCREEN OUTPUT.
      LOOP AT SCREEN.
        IF screen-name = 'PASSWORD'.
          screen-invisible = '1'.
          MODIFY SCREEN.
        ENDIF.
      ENDLOOP.

  • Why do I have Apple_HFS on /dev/disk0s2 and Apple_HFS Macintosh HD on /dev/disk1? Is this normal? They're exactly the same partition

    /dev/disk0
       #:                       TYPE NAME                    SIZE        IDENTIFIER
       0:      GUID_partition_scheme                        *750.2 GB    disk0
       1:                        EFI                         209.7 MB    disk0s1
       2:                  Apple_HFS                         498.0 GB    disk0s2
       3:                 Apple_Boot Recovery HD             650.0 MB    disk0s3
       4:       Microsoft Basic Data BOOTCAMP               251.3 GB    disk0s4
    /dev/disk1
       #:                       TYPE NAME                    SIZE        IDENTIFIER
       0:                  Apple_HFS Macintosh HD           *498.0 GB    disk1

    I have the same question for my iMac. I guess it is a logical drive that allows us to run a very large drive without breaking the boot recovery. I notice my MacBook Pro doesn't have it, since it has a much smaller drive.

  • Upload file to global directory in Dev, Q&A and Prod!

    I have an upload application in BSP that uploads files to, for example, /usr/sap/BWD/files.
    This works in Development, but of course this directory is not available in Production, so the BSP won't work there.
    Isn't it possible to use one global directory?
    Right now somebody created a directory for us that's the same on all 3 systems (Dev, QA, and Production).
    This dir is
    on Development:  DIR_TRANS     /usr/sap/transBW
    on Quality:      DIR_TRANS     /usr/sap/transBW
    on Production:   DIR_TRANS     /usr/sap/trans
    Notice the small difference in the Production path... Is there a way to use DIR_TRANS instead of the real path?
    My application writes data like this:
    fname = '/usr/sap/CBD/files/FILE.CSV'.
    OPEN DATASET fname FOR OUTPUT IN TEXT MODE ENCODING DEFAULT.
    IF sy-subrc GT 0.
      WRITE: / 'Error opening file'.
    ENDIF.
    LOOP AT data_tab INTO lin.
      TRANSFER lin TO fname.
    ENDLOOP.
    CLOSE DATASET fname.
    Thanks a lot, points will be rewarded for useful answers!
    Thanks!

    Use transaction FILE to create a logical path for the actual file path,
    and then use FM FILE_GET_NAME:
    CALL FUNCTION 'FILE_GET_NAME'
      EXPORTING
        client           = sy-mandt
        logical_filename = pil_file   " input: logical file name
        operating_system = sy-opsys
      IMPORTING
        file_name        = p_i_file   " output: physical file name
      EXCEPTIONS
        file_not_found   = 1
        OTHERS           = 2.
    Regards
    Raja

  • Would like to know if this is correct disk configuration for 11.2.0.3

    Hello, please see the procedure below that I used to allow the Grid Infrastructure 11.2.0.3 OUI
    to recognize my EMC SAN disks as candidate disks for use with ASM.
    We are using EMC PowerPath for our multipathing, as stated in the original problem description. I want to know if this is a fully supported method for
    configuring our SAN disks for use with Oracle ASM, because this is Red Hat 6 and we do not have the option to use the ASMLib driver. Please note that I have
    been able to successfully install the Grid Infrastructure for a 2-node RAC cluster at this point using this method. Please let me know if there is
    any issue with configuring disks using this method.
    We have the following EMC devices, which have been created in the /dev directory. I will be using device emcpowerd1 as my disk for the ASM diskgroup I will be
    creating for the OCR and voting device during the grid install.
    [root@qlndlnxraccl01 grid]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l emc*
    crw-r--r--. 1 root root 10, 56 Aug 1 18:18 emcpower
    brw-rw----. 1 root disk 120, 0 Aug 1 19:48 emcpowera
    brw-rw----. 1 root disk 120, 1 Aug 1 18:18 emcpowera1
    brw-rw----. 1 root disk 120, 16 Aug 1 19:48 emcpowerb
    brw-rw----. 1 root disk 120, 17 Aug 1 18:18 emcpowerb1
    brw-rw----. 1 root disk 120, 32 Aug 1 19:48 emcpowerc
    brw-rw----. 1 root disk 120, 33 Aug 1 18:18 emcpowerc1
    brw-rw----. 1 root disk 120, 48 Aug 1 19:48 emcpowerd
    brw-rw----. 1 root disk 120, 49 Aug 1 18:54 emcpowerd1
    brw-rw----. 1 root disk 120, 64 Aug 1 19:48 emcpowere
    brw-rw----. 1 root disk 120, 65 Aug 1 18:18 emcpowere1
    brw-rw----. 1 root disk 120, 80 Aug 1 19:48 emcpowerf
    brw-rw----. 1 root disk 120, 81 Aug 1 18:18 emcpowerf1
    brw-rw----. 1 root disk 120, 96 Aug 1 19:48 emcpowerg
    brw-rw----. 1 root disk 120, 97 Aug 1 18:18 emcpowerg1
    brw-rw----. 1 root disk 120, 112 Aug 1 19:48 emcpowerh
    brw-rw----. 1 root disk 120, 113 Aug 1 18:18 emcpowerh1
    As you can see, the permissions by default are root:disk, and this is set at boot time. These permissions do not allow the Grid Infrastructure to recognize
    the devices as candidates for use with ASM, so I have to add udev rules to assign new names and permissions at boot time.
    Step 1. Use the scsi_id command to get the unique SCSI ID for the device, as follows.
    [root@qlndlnxraccl01 dev]# scsi_id --whitelisted --replace-whitespace --device=/dev/emcpowerd1
    360000970000192604642533030434143
    Step 2. Create the file /etc/udev/rules.d/99-oracle-asmdevices.rules
    Step 3. With the scsi_id that was obtained for the device in step 1, create a new rule for that device in the /etc/udev/rules.d/99-oracle-asmdevices.rules file. Here is what the rule for that one device looks like.
    KERNEL=="sd*1", SUBSYSTEM=="block", PROGRAM="/sbin/scsi_id --whitelisted --replace-whitespace /dev/$name", RESULT=="360000970000192604642533030434143", NAME="asmcrsd1", OWNER="grid", GROUP="asmadmin", MODE="0660"
    (You will need to create a new rule for each device that you plan to use as a candidate disk for Oracle ASM.)
    Step 4. Reboot the host for the new udev rule to take effect, and then verify that the new device entry is added to the /dev directory with the
    specified name, ownership, and permissions required for use with ASM once the host is back online.
    Note: You will need to replicate/copy the /etc/udev/rules.d/99-oracle-asmdevices.rules file to all nodes in the cluster and restart them for the changes to
    take effect, so that all nodes can see the new udev device name in the /dev directory on each respective node.
    You should now see the following device on the host.
    [root@qlndlnxraccl01 rules.d]# cd /dev
    [root@qlndlnxraccl01 dev]# ls -l asm*
    brw-rw----. 1 grid asmadmin 65, 241 Aug 2 10:10 asmcrsd1
    Step 5. Now, when you are running the OUI installer for the grid installation and you reach the step where you define your ASM diskgroup, choose
    external redundancy, then click on "Change Disk Discovery Path" and change the disk discovery path as follows:
    /dev/asm*
    At this point you will see the new disk name asmcrsd1 showing as a candidate disk for use with ASM.
    Please let us know if this is a supported method for our shared disk configuration.
    Thank you.

    Hi,
    I've seen this solution in a lot of forums, but I don't agree with it or don't like it at all, even if we have 100 LUNs of 73 GB each.
    The thing is, as in any other Unix flavor, we don't have ASMLib, just EMC PowerPath running on different Unix/Linux flavors, and we don't like udev rules, dm-path and that stuff either.
    Try this as the root user:
    ls -ltr emcpowerad1
    brw-r----- 1 root disk 120, 465 Jul 27 11:26 emcpowerad1
    # mkdir /dev/asmdisks
    # chown oragrid:asmadmin /dev/asmdisks
    # cd /dev/asmdisks
    # mknod VOL1 b 120 465
    # chmod 660 /dev/asmdisks/VOL*
    Repeat the above steps on the second node, then set
    asm_diskstring='/dev/asmdisks/*'
    Talk with your sysadmin and storage-admin guys to guarantee naming and persistence on all nodes of your RAC using EMC PowerPath (even after a reboot or SAN migration).
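    The per-device steps above can be scripted. A minimal dry-run sketch (the VOL naming and the oragrid:asmadmin ownership come from the post above; the helper only prints the commands instead of running them, so they can be reviewed before executing as root):

    ```shell
    #!/bin/sh
    # Print the node-creation commands for each device given on the command
    # line, reading its current major/minor numbers with stat(1).
    mk_asm_nodes() {
      i=1
      for dev in "$@"; do
        majmin=$(stat -c '%t %T' "$dev")   # major and minor, in hex
        major=$((0x${majmin%% *}))
        minor=$((0x${majmin##* }))
        # block device -> "b", character device -> "c"
        if [ -b "$dev" ]; then type=b; else type=c; fi
        echo "mknod /dev/asmdisks/VOL$i $type $major $minor"
        echo "chown oragrid:asmadmin /dev/asmdisks/VOL$i"
        echo "chmod 660 /dev/asmdisks/VOL$i"
        i=$((i + 1))
      done
    }

    # Dry run against any existing device node:
    mk_asm_nodes /dev/null
    ```

    Feeding it the real partition devices (e.g. mk_asm_nodes /dev/emcpowerad1) prints the exact mknod/chown/chmod lines to run on each node.
    
    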

  • How do I change the Apple ID associated with my dev account to a different, already existing Apple ID/account

    Hi,
    I currently have an iPhone Dev account, which is associated with a certain Apple account.  I also have a different Apple account that I have already created.  I wish to change the dev account to use this other account.  I know you can change your Apple ID, but I don't know how to change to an already existing account. When I tried to just change the Apple ID and email to match my other account, I got an error saying that the email is already in use.
    Any thoughts?
    Thanks,
    rsm10

    Support - iOS Developer Program - Account Management

  • Not able to transport the BI Content (Dev) to Production

    Hi Gurus,
    I'm transporting the BI Admin Cockpit to Prod and it's ending with errors.
    I've successfully moved it to QA, but while moving to Production it ends with errors.
    I'm trying to push the objects listed below:
    0TCT_VC11                            InfoCube
    0TCT_IS11                            InfoSource
    0TCT_DS11    DW1CLNT500              Transfer structure / transfer rules
    0TCT_DS11    DW1CLNT500              DataSource (3.x)
    ZPAK_4D8CABGMHNNDBZR20LDDZB07Y       InfoPackage
    Along with these I've collected all the objects used in the InfoSource / communication structure.
    After moving it to Prod, it ended with the errors below:
    Start of the after-import method RS_ISMP_AFTER_IMPORT for object type(s) ISMP (Activation Mode)
    DataSource 0TCT_DS11 does not exist in source system PW2CLNT100 of version A
    Mapping between data source 0TCT_DS11 and source system PW2CLNT100 is inconsistent
    DataSource 0TCT_DS11 does not exist in source system PW2CLNT100 of version A
    P.S. The DataSource is not from R/3; it's a BI DataSource.
    Kindly help me in transporting it to Prod.
    Many thanks in advance.
    Regards,
    Akhil

    Hi Srikanth,
    Compare the transport requests imported into QA & PRD from Dev; then you can find the difference.
    If you transported the same request from DEV to QA and from DEV to PRD, then this might be possible
    because of the mapping of the source system.
    If you have transported the same request from DEV to QA & DEV to PRD, then please import that request from QA to PRD.
    Please also check the mapping of source systems in transaction RSA13 in the DEV & QA systems.
    Hope this will help you.
    Thanks,
    Vijay.

  • Best practice on a 'from dev to test' move

    Hello.
    My repository: 10.2.0.1.0
    My client: 10.2.1.31
    I am writing to ask what would be the best-practice and most common-sense way to move OWB from a dev to a qa/test environment?
    I have read a number of recommendations on this forum and in other Oracle docs, and it seems a somewhat tedious exercise...
    At the moment I am simply copying and pasting (not very professional, but works a treat!) and then just re-syncing the tables to point to the correct location on some of my smaller projects.
    Now I have a huge project with hundreds of maps with different source locations etc.
    I want to move it into test.
    My test environment is where we test the ETL process before implementing it in live, as opposed to UAT testing.
    I imported the tables into OWB from test; now I want to move my maps from dev to test, and this is where my HOW TO comes in.
    I have different runtime repositories on my test, as per Oracle recommendations (the same names apply to the dev, test, and live repositories for consistency purposes). Importing the maps from the dev export into test doesn't really work, and I don't really want to start tweaking export files.
    For some reason the import only imports back into the project the export was taken from... (which is as useful as a smack on the head, in my humble opinion).
    Copy-and-paste then re-sync all tables would be madness, misery, and pain all in one!
    So what do I need to do?
    1) OK, I imported all the tables and views from the test environment into OWB.
    2) How do I move my maps from dev to test?
    3) Even if I copy them over, would I honestly have to then re-sync the tables in every single map (I am already crying at the thought of it)?!
    It seems a little tedious to me.
    I can imagine that there is no silver bullet and everyone has different ideas, but someone please share your experience of how you would do it.
    Here is something from the user guide, and no matter how many times I read it, I just don't get how to relate it to what I need to achieve.
    The following quote is from the "OWB User Guide", Chapter 3:
    "Each location defined within a project can be registered separately within each Runtime Repository, and each registration can reference different physical information. Using this approach, you can design and configure a target system one time, and deploy it many times with different physical characteristics. This is useful if you need to create multiple versions of the same system such as development, test, and production."
    As I said, I have all my tables imported from the DB into OWB; now how do I make my maps appear in the repository on test? I can see the relevance of location for deploying maps into the test runtime repository, but before then I somehow have to make them appear in my test runtime repository in Design and make sure they are referencing the correct tables etc.
    Any help would be greatly appreciated.
    Kind Regards
    Vix

    Hello Oleg.
    Thank you very much for such a detailed and very helpful reply.
    You are correct - I have my Design Centre and within it two projects - dev and test.
    Dev has all its locations pointing to the development DB, and it has its own runtime repository/control centre configured.
    Test has all its locations pointing to the test DB, and it also has its own runtime repository/control centre configured.
    I have one design centre and two runtime environments.
    Both dev and test have identical tables etc. I moved the logic over from dev to test (all the functions, procedures etc), and I have also imported the tables and logic from the TEST DB into the test project.
    All I want to do now is move the maps over from DEV to TEST. Which is not a problem (copy and paste are helpful), but the copied maps still point to the tables in dev. Which means I have to sync them with the test tables - I hope I am making sense here!
    I was hoping there is some clever way of just changing something to effectively tell a table in the map "point to the table in this database". If the map is already configured, the only way to do it is to sync the tables, which lets you select the DB and table you want the table in the map to point to.
    The reason I do not use imp/exp between projects is that it is not really reliable. I then have to jump through hoops ensuring all the constraints etc are there. It is safer to just import whatever I need from the DB, ensuring all my constraints etc are there.
    I do regular exports as a means of having a backup copy of the project, but I have never managed to import anything from one project to another (it was easier with OWB 9, where it was possible to amend the .mdl file). It works fine to import back into the project the export was taken from.
    I don't have problems with the locations etc - it took me hours to set everything up the way I wanted it, and now all the deployments go to the right schemas, DBs etc.
    Is there any other way of re-pointing the tables in a map to another DB? As with flat files, there is an option to choose the location of the file; once the location is defined/registered, you can choose whichever one is needed from the drop-down on the left of the map.
    I hoped there would be something similar for tables. Like a big bulk option, a "tick here if you want all tables in the maps to point to identical tables in another DB" type thing. I guess something like a bulk sync option...
    Oh well, I guess I just have to stick with the sync option (sobbing uncontrollably), and it hasn't stopped raining here for days!
    Once again, thank you very much for all your kind help and advice.
    Kind Regards
    Vix

  • Create a separate master rep for dev and test

    Hi
    We are experiencing poor performance in our Dev/Test work repositories.
    This may be due to geographic reasons: the dev/test reps are in a different location, a thousand miles apart from the Master/prod work rep.
    I am considering creating a separate Master rep for dev/test. Is this just a matter of:
    1. create SQL space and a new master repository, including assigning a unique repository ID
    3. export/import the master rep from the Production environment using the MImport utility in the bin directory of ODI
    4. repoint the existing dev/test work reps' connections to the new Master rep?
    This is a fairly low-use ODI 10g implementation that supports a Hyperion Planning implementation. Not too many interfaces, changes etc. to manage day to day.
    Your advice is appreciated.
    Cheers


  • It's very urgent: while running a command at the command prompt I got the error below

    Hi Gurus,
    I am new to OAF.
    Please help me.
    I am trying to deploy an OAF page to Oracle Apps.
    I run this command at the cmd prompt:
    D:\Jdeveloper\jdevbin\bin>import D:\Jdeveloper\jdevhome\jdev\myprojects\wnsgs\oracle\apps\ap\projectnodetailes\webui\ProjectnogetailesPG.xml -username apps -password devapps -rootdir D:\Jdeveloper\jdevhome\jdev\myprojects -dbconnection "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=wimuat1.wind.wnsgroup.net)(PORT=1524))(CONNECT_DATA=(SID=FINDEV)))"
    'import' is not recognized as an internal or external command,
    operable program or batch file.
    Please help me, it's very urgent. Today is my last day, please help.
    Thanks
    Latha

    Hi Gyan,
    import C:\Jdeveloper\jdevhome\jdev\myprojects\exl\oracle\apps\per\ijp\webui\InternalPG.xml -rootdir C:\Jdeveloper\jdevhome\jdev\myprojects -username apps -password apps -dbconnection "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=172.24.2.18)(PORT=1528))(CONNECT_DATA=(SID=DEV)))"
    Before I run this command, I am told to go to a particular path and then run it.
    Where exactly do I go to run this command?
    Latha.

  • Dev 6.0 to 8iEE Connection on same NT Box

    Hi,
    It seems, from what I have read, that it is a common problem to connect
    Dev 6.0 to 8i on the same NT box. I reckon we all need this process
    documented. I am currently unsuccessful in this task and so need
    some help.
    1. Load NT with TCP/IP, machine name "delly".
    2. Load 8iEE. 8iEE connects through the DOS command line and the SQL
    Windows interface OK. Database home Ora8i, database name "abby".
    3. Load Dev 6.0. This did not contain a version of SQL Easynet.
    4. Load Des 6.0. This added SQL Easynet.
    5. Changed PATH so that the Oracle8i bin directory precedes the WinNT
    bin directory. SQL*Plus works again on the DOS command line.
    Now stuck. Can't connect, and the machine attempts to dial out.
    Tried using Easynet to achieve a TCP/IP loopback by using machine
    name "delly" and IP 127.0.0.1, but failed. Do I need to start
    the listener (lsnrctl START)? Do I need an application server?
    I am a developer with limited DBA/NT experience, so would be
    grateful if someone would complete the steps.
    Thanks, John

    John Brakewell (guest) wrote:
    : Thanks, John
    OK, got it working.
    It was all down to tnsnames.ora; I needed to edit the files
    manually, both for Net8 and v2.
    Seems much improved over Forms 4.5.

  • [SOLVED] Is this considered a bug or is it intentional?

    Hello
    I just found something interesting.
    Open a terminal and become root. Run a program (like `php`) which will wait for user input.
    Now run `lsof -p` and find the devices for its input/output (file descriptors numbered 0, 1, 2, which are under /dev/pts on my system). Let's say it is /dev/pts/2.
    Now if you look at `ls -l /dev/pts`, you can see that /dev/pts/2 (and all the other numbers which are input/output for other processes) are owned by my user!
    So this means I can access them and read/write them. I tested this in a Python shell, and I can totally read/write the input/output of that process.
    Although I can't run `lsof -p` for that root process under my own user, this is still some kind of problem.
    So I thought this could be a security issue. Reading/writing a root process's input/output from a normal user may lead to abuse.
    Is this intentional? A security bug, in the kernel, maybe?
    Where do I report this bug, if it is one?
    Thank you
    P.S. I couldn't find anywhere related to security in the forum, so I posted this here. If it is in the wrong place, please move it.
    Last edited by thelastblack (2014-08-15 11:18:22)

    I can't replicate this.
    Yes, the pts is owned by my user, but that is not the root process; that is the terminal session that I opened as a regular user.  I open one urxvt window - call it urxvtA - and that creates /dev/pts/0 owned by my user.  Then I open another - urxvtB - also as a regular user, and /dev/pts/1 is created and owned by my user.  I 'su' in urxvtB; no new pts is created, but /dev/pts/1 is still owned by my user, of course.  From urxvtA I can write commands to /dev/pts/1 and they will show up in urxvtB, but they will not be executed.
    The /dev/pts/# are not the stdin and stdout of the process running in the terminal session (those are under /proc/), but just the connection to the original shell launched in that terminal.
    Further and clearer evidence: I start fresh again with no terminals running.  I start urxvtA, which creates /dev/pts/0 owned by my user.  From that shell I 'su', then from the root shell session I launch another urxvt (urxvtB).  As previously, this creates /dev/pts/1, but this time /dev/pts/1 is owned by root.
    The pty session in the first case is executed by my user.  The terminal was opened as a regular user, and so a regular user can read what's in that terminal and type things into that terminal.

  • How to transport a MIME object in Web Dynpro ABAP from DEV to QA

    Hi,
    I have enhanced a standard WDA component. In this enhancement I added one image as a MIME object, along with some other changes. All the changes have been moved from dev to QA except this MIME object.
    In fact, I have deleted, re-imported, and re-transported the MIME object, but it has still failed to move from dev to QA.
    Please provide your valuable inputs ASAP, as this is a high-priority issue.
    Pooja

    By going into the MIME repository, right-clicking on the MIME object, and selecting "write transport entry" from the context menu,
    then creating a new request for the transport.
    However, I guess you must have followed this approach already.
    By the way, where is the MIME object stored? Is it in the same standard component folder?
    Store it somewhere else, in some non-standard component or some public folder, and then try to transport it.
    thanks
    sarbjeet singh

  • SYSTEM LANDSCAPE - DEV and PROD - refresh

    Hi All,
    We have a situation like this.
    1. Our BW QA is a copy of PROD.
    2. Our BW DEV is not a copy of either QA or PROD.
    3. We have an issue with a particular process chain in PROD, and we need to correct it (and follow the landscape: that is, we want to do the correction in DEV and then transport to QA and then to PROD).
    4. Since this particular process chain does not exist in the DEV environment, our Basis team is advising us to check the difference between DEV and PROD as far as this process chain is concerned, and then refresh it to the DEV system so that we can carry out the change/modification.
    We don't really know what exactly we need to find out about the differences between DEV and PROD for this PC, so that we can inform the Basis team...
    Is it correct that, since this particular process chain is associated with various queries, data targets, update/transfer structures, etc., these will be overwritten / copied / refreshed in the DEV system from PROD?
    Please advise. Please help.

    Hi George, SAT, BWer,
    Thanks for your messages.
    George: Answers to your questions:
    1. Our Basis team wants to copy all the configuration (related to the particular process chain only) and not the data.
    2. How do we pack the process chain into a portable file, extract that file, and then import it into DEV - and how do we avoid errors when doing so (knowing that the objects will be different)? I don't know whether to copy the objects or not.
    3. I agree with you. We are even ready to work from DEV to PROD.
    SAT: Answers to your questions:
    1. I do see some process chains in DEV, but none related to this particular PC. Many development PCs are there, and I don't know where the originally developed PCs are!
    2. I am new to this client; sorry, I don't know how this was deleted.
    BWer: Answers to your questions:
    1. Yes, the PC is latest in PROD.
    2. We don't want to create the process chain all the way from scratch in DEV. Yes, this is one of the process chains used in the metachain. In total there are five chains in PROD related to this. We need to modify only the second process chain, which we are trying to copy into DEV so we can make the changes.
    Hope you are clear about our requirement.
    Please advise me what exactly we need to look at in the DEV and PROD systems so that the required PC is copied, along with (????), to the DEV system. Please help.
    Thank you very much in advance.

  • Replace chmod 666 /dev/dri/card0 with udev rule [Solved]

    Hello everyone! I'm configuring my xorg.conf to use 3D. I have an S3 UniChrome Pro (K8M800) chip, and I have it working right now, but in the manual I read to do this there's a hack that I want to do the right way:
    Link
    a. First, the permission issue.  This is a dirty hack, but I have not taken any time to learn the innards of udev.  As root, type "chmod 666 /dev/dri/card0".  This will enable a regular user to use dri.  I know I should fix it via udev's config, and I intend on figuring that out in the near future and posting an update to this HOWTO.  For now, I put this command after "modprobe via" in my /etc/rc.d/rc.local file.
    So I was wondering if maybe someone can help me do the udev trick; I have the line in my rc.local, but if this can be done via udev I want to do it that way.
    I was reading the udev article on the wiki, but with all that KERNEL, %k, %n and that stuff I have no idea how to do it. If you think it's better for me to learn this the hard way, some useful link would be good.
    Thank you.

    Starting with /dev/hdd, there is already a rule for this in the default ruleset:
    BUS=="ide", KERNEL=="hd[a-z]", SYSFS{removable}=="1", SYSFS{media}=="cdrom*", NAME="%k", GROUP="optical"
    This creates /dev/hdd as follows:
    brw-rw---- 1 root optical 22, 64 2006-08-10 19:11 /dev/hdd
    so all you need to do is add yourself to the optical group. I don't use gnomebaker myself, but the underlying setup is the same for all burning apps.
    I was going to post a similar answer for your dri problem as well, because we do have this rule by default:
    KERNEL=="card[0-9]*", NAME="dri/%k", GROUP="video"
    which should create /dev/dri/cardN with root:video ownership, but I can't verify that - for some reason, my laptop has
    crw-rw---- 1 root root 226, 0 2006-09-08 09:11 /dev/dri/card0
    instead, i.e. GROUP="video" is not applied. If yours shows up as root:video, however, just add yourself to the video group as well.
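    If the GROUP="video" assignment from the default ruleset really isn't being applied, one alternative to the chmod 666 hack is a small local override rule. A sketch, reusing the match keys of the default rule quoted above (the 40-dri.rules file name is an arbitrary choice; any name in /etc/udev/rules.d works):

    ```
    # /etc/udev/rules.d/40-dri.rules
    KERNEL=="card[0-9]*", NAME="dri/%k", GROUP="video", MODE="0660"
    ```

    After re-triggering udev (or rebooting), /dev/dri/card0 should come up as root:video with mode 0660, so adding your user to the video group is enough and no chmod in rc.local is needed.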
