Issue with VLC streaming through Dolphin FTP client

I am not sure if I should post this here or in the multimedia section; excuse me if I am mistaken. I have been using Arch Linux for almost 4 weeks with GNOME 3. I installed KDE today after removing GNOME (pacman -Rscn gnome). I have a small home server running an FTP server where I have some media files (.mkv). In GNOME I usually used Nautilus to access the FTP folder (sftp://servername.dyndns.org/srv/) and ran VLC to stream the media. I tried to do the same with KDE's Dolphin with no luck.
First I was not able to view the folder with Dolphin at all. I researched and found this thread with a solution:
http://forum.kde.org/viewtopic.php?f=18 … 5&start=15
Now I am able to access the folder, but I can't stream any media using VLC.
VLC reports the following errors:
Your input can't be opened:
VLC is unable to open the MRL.
sftp://servername.dyndns.org/srv/filename.mkv
Check the log for details.
Going into Tools -> Messages gives the following:
main error: open of
sftp://servername.dyndns.org/srv/filename.mkv failed:(null)
I suspect something is wrong with the FTP client configuration in Dolphin. This bug report also bothers me:
http://old.nabble.com/-Bug-274170--New% … 05158.html
Another thing is that I am able to open the media files using other media players (Dragon Player, MPlayer), but then the files are automatically downloaded into the local folder /var/tmp/kdecache-username/krun/filename.mkv. The media player then loads the file from there. It's the same if I open a text file on the server, edit it and save it: it's first downloaded to the /var/ folder, and then I am asked to upload it back to the server when it is closed.
My conclusion is that the FTP client in Dolphin does not work properly, or that KDE's KIO works differently from whatever is used in GNOME/Nautilus. I hope I have explained the issue as well as possible and would appreciate any help or leads on how to solve this.

Thanks for your reply. After checking some ideas, I found that to get the FTP to work I needed to use the internal IP 10.0.0.1 for it rather than the normal IP like before the firewall (probably a very beginner error, sorry for that). And I discovered the exact issue causing clients to not see their characters when not using Hamachi.
Hamachi treats everyone joined to the network as though they are local, I believe, so when the server sends character info via Hamachi it thinks it is sending the info locally, and then Hamachi itself sends it out to the external client. While tracing the data, I found that the login process sends a 33-byte packet of data to the client via TCP from port 5051 out to the client on a random port, usually in the range of 40000-50000, telling the client to send a request to the other process to ask for the character information.
Now for some reason, when a client logs in through Hamachi, that packet is sent correctly to their Hamachi IP and received fine (sent from 10.0.0.1:5051 to the Hamachi IP:40k-50k). But when a client tries to log in using a normal IP with Hamachi turned off, the login process does send the 33-byte packet from 10.0.0.1:5051 to the client's WAN IP through the usual port, but the client never receives this packet and as such does not request the character information.
So my guess is that something on the 5505 is disallowing the login process from sending the data externally to the clients' WAN IPs? Though this is very odd, because it does allow the client to actually log in to the account and seems to receive at least part of that information fine.
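To isolate whether the firewall is eating that packet, the handshake can be mimicked with a plain socket pair. Below is a minimal Python sketch; the 33-byte payload and loopback addresses are stand-ins for the real game traffic - run the listener on the server's WAN-facing interface and the client from outside the network to test the real path:

```python
import socket
import threading

# Stand-in for the 33-byte character-info trigger seen in the trace.
PAYLOAD = bytes(33)

def serve_once(state):
    # Minimal stand-in for the login process: accept one client and
    # push the 33-byte packet at it, as the packet trace shows.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))  # on the real server, bind the WAN interface
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()
    conn.sendall(PAYLOAD)
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
threading.Thread(target=serve_once, args=(state,)).start()
state["ready"].wait()

cli = socket.socket()
cli.connect(("127.0.0.1", state["port"]))
data = b""
while True:
    chunk = cli.recv(64)
    if not chunk:
        break
    data += chunk
cli.close()
print(len(data))  # 33 on loopback; across the WAN, anything less means the packet is dropped in transit
```

If the same pair run across the WAN never delivers the 33 bytes, the firewall (not the game processes) is the thing to look at.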
If any help that might resolve this can be given, I would very much appreciate it; this issue is limiting my client base, and as such my income and business as a whole. Thank you in advance for any help given.

Similar Messages

  • I'm trying to connect through the FTP client Filezilla. When I try to log in with the wizard, it gives me a "503 Failure of Data Connection" reply; when I attempt to log in myself, it gives me a "530 Login Authentication Failed." HELP!!!

    My current software is: Mac OS X Lion 10.7.5 (11G63)
    When I attempt to use the Filezilla connection wizard I get the following message:
    Connecting to probe.filezilla-project.org
    Response: 220 FZ router and firewall tester ready
    USER FileZilla
    Response: 331 Give any password.
    PASS 3.7.1.1
    Response: 230 logged on.
    Checking for correct external IP address
    Retrieving external IP address from http://ip.filezilla-project.org/ip.php
    Checking for correct external IP address
    IP 27.0.19.56 ch-a-bj-fg
    Response: 200 OK
    PREP 52470
    Response: 200 Using port 52470, data token 1871898076
    PORT 27,0,19,56,204,246
    Response: 200 PORT command successful
    LIST
    Response: 150 opening data connection
    Response: 503 Failure of data connection.
    Server sent unexpected reply.
    Connection closed
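(For reference, the `PORT 27,0,19,56,204,246` line encodes the data-connection endpoint in active mode: the first four numbers are the IP and the last two are the port, high byte first, so 204 × 256 + 246 = 52470 - matching the `PREP 52470` line. The 503 typically means the tester could not open that inbound data connection back to you, which points at a NAT or firewall in the path. A quick Python check of the encoding:)

```python
def decode_port_args(args):
    """Decode the 6 comma-separated numbers of an FTP PORT command
    into (ip, port): four IP octets, then the port's high and low bytes."""
    nums = [int(n) for n in args.split(",")]
    ip = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]
    return ip, port

print(decode_port_args("27,0,19,56,204,246"))  # ('27.0.19.56', 52470)
```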
    When I attempt to login Host/Username/Password myself I get the following message:
    Status:          Resolving address of amyhoney.com
    Status:          Connecting to 184.168.54.1:21...
    Status:          Connection established, waiting for welcome message...
    Response:          220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
    Response:          220-You are user number 12 of 500 allowed.
    Response:          220-Local time is now 04:05. Server port: 21.
    Response:          220-This is a private system - No anonymous login
    Response:          220 You will be disconnected after 3 minutes of inactivity.
    Command:          USER 5475****
    Response:          331 User 5475**** OK. Password required
    Command:          PASS ********************
    Response:          530 Login authentication failed
    Error:          Critical error
    Error:          Could not connect to server
    Now before anyone points out the obvious: my username and password are correct. I've already gone through changing them so I know they are.
    Additionally, I've pretty much tried EVERYTHING I've read online, from messing with "terminal" (and subsequently the FTP and SFTP options) to changing the sharing options and turning on file sharing/remote management, as well as just turning off my Firewall completely.
    Now, I've used Filezilla before when I first published my site and everything worked fine. My site is published through Wordpress, so most of my editing was done by simply logging into my "wp-login." I recently changed the theme, and in order to change the header image in that theme I have to do it through my "wp-content" folder, which means I need to use Filezilla. I feel like a complete moron right now considering I've had my site for about a year and can't even do something this simple.
    I've read that the newer versions of Lion/Mountain Lion don't support automatic FTP anymore, which (as I mentioned prior) I attempted to fix through Terminal. However, nothing I seem to do works.
    Can someone walk me through fixing this? And I do mean 'walk me through'. I'm not a tech-savvy nerd who knows all the lingo, I just know the basics so sorry if my ignorance offends you.
    HELP!!

    First, be sure the login and password are OK. Sometimes the address starts with "http://..." and sometimes with "ftp://...". Try both normal FTP access and Secure FTP access (SFTP). If all else fails, contact the site's provider.

  • Issue with filtering KPIs through perspectives in a Tabular model

    I am having issues with trying to filter KPIs through a perspective.  In my fact table, I have three KPIs created out of SumOf measures.  There are three different user groups, and one user group wants a KPI specifically for their area, so it should
    not show up on the other perspectives.
    I remove the SumOf column from the list of fields under the Perspectives menu (there is not an option for a KPI...just Sum of [Column Name]).  When I select that perspective through my model view, it does not show the KPI under my fact table, which
    is what I would expect.
    When I try to analyze in Excel, or deploy to the server and connect to that perspective through a pivot table, at times all KPIs will show up (with my Sum of ... column removed) and other times, zero KPIs appear.  When I connect to the default perspective,
    however, everything appears.
    Has anyone run into this issue before?  I have tried a process recalc on my cube through SSDB (I'm not sure if I did it right) and I tried a recalc through the Process menu, to no avail.  Any help will be greatly appreciated because I am stumped
    at the moment.

    I've had problems with KPIs in perspectives (i.e. not showing up in the correct perspective). Unfortunately, I have not been able to find a resolution. At the time I experienced the issue, I tried googling and looking at the forums, but didn't
    come up with anything.

  • Issue with permissions when using SFTP (FTP over SSH)

    I have an issue when I use SFTP: for some reason users are able to browse the system's root directory and other users' directories. Also, some users don't have access to the FTPRoot alias and some do. If I connect using FTP everything is fine. Can someone shed some light on this issue?
    Thanks,
    Toros

    There is absolutely no correlation between FTP and SFTP.
    SFTP is actually a file transfer run over SSH and therefore subject to the normal account/shell restrictions, just as if the user logged in via SSH.
    What you're confusing it with (easily done) is FTPS, which is SSL-encrypted FTP. This uses SSL/TLS to secure an FTP connection and is subject to the account restrictions defined in the FTP server, independent of the user's shell access.
    So, in other words, SFTP uses an SSH session to transfer files. FTPS uses SSL to secure an FTP session.
    There's no trivial way to prevent an SFTP user from walking through the directory tree, since there's no difference between their SFTP session and an SSH session.
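That said, if the goal is to keep SFTP users out of the rest of the filesystem, the usual approach is OpenSSH's built-in internal-sftp subsystem with a chroot. A minimal sshd_config sketch, assuming a group called sftponly and per-user chroot directories (both placeholders; note that the chroot path and all of its parent directories must be root-owned and not group- or world-writable, or sshd will refuse the login):

```
# In sshd_config: replace the default sftp-server with the in-process one
Subsystem sftp internal-sftp

# Confine members of the sftponly group to their own directory
Match Group sftponly
    ChrootDirectory /srv/ftp/%u
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
```

Users matched by the block get SFTP only (no shell), rooted at their chroot directory.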

  • Deployment Issues with Custom TS variables set for client

    Good Day folks!
    I have come across an interesting issue that I have not been able to find a quick fix for, so I am looking for some ideas on where to troubleshoot moving forward.
    So the Issue:
    I have a TS that deploys a Windows 8.1 SOE image. This is done first by using a PXE-boot PE image for "unknown" systems to load a custom HTA. This HTA allows me to manually add the system to SCCM and add it to a required-deployments collection that has the
    8.1 SOE deployed to it.
    The HTA also sets a few custom variables for the system resource: things like system location, machine domain etc.
    Once the HTA has run, the system has a delay to allow for the resource to show up in the 8.1 deployment collection, and then closes.
    Now all this appears to work fine, the system is added to the collection, reboots and the deployment runs from start to finish without error.
     I can also check the system resource and the variables are present.
    The problem I have found is that the custom variables for this resource are not being used by the TS after reboot. Upon further investigation I found that these variables are not even being retrieved (I ran a VB script to save all the variables from the
    TS to a txt file to check this). Because of this, the TS is bypassing some needed TS tasks.
    A few interesting things to note:
    The system appears to be added again when AD discovery is run, so it causes a duplicate.
    The client does connect to the SCCM server after deployment but is not receiving deployments (it is getting some policy).
    This worked with SCCM 2012 but not 2012 R2.
    So it appears that when the system reboots from the HTA PE step, it has identified itself as an unknown system again, even though it has been manually added.
    I am interested to know, first of all, whether others would agree with this, and second, how SCCM, while running a TS, matches itself up to a system to retrieve the custom-set variables before client install etc. Or a good place to start looking
    to dig up more information!  Or anything else!
    Thanks
    Stuart.

    Have you taken a look at this hotfix?
    http://support.microsoft.com/kb/2907591
    We had to apply it in order for our variables to be seen.
    joeblow

  • Issues with AUTO cycling through ....

    I'm trying to do this:
    Any help with one or the other is very much appreciated !!!
    1) When the Timer has finished auto-cycling through the tabs (1 to 16) of the ViewStack and switched over to tab (1) to STOP, I would like to call a function to do something.
    The question now is how to write the code to detect that the Timer has come to a STOP on tab (1), and how I can incorporate this into the existing (onTimerOne) function.
    2) The second item I'm after: if I manually select any tab (1 to 16), I'd also like to call a function to do something.
    3) The third item I'm after is to automatically reset the ViewStack to tab number (1) when I click a Btn.
    <mx:Script>
    <![CDATA[
        import flash.events.TimerEvent;
        import flash.utils.Timer;

        private var timerOne:Timer;

        private function initOne():void {
            timerOne = new Timer(5000, myViewStack.numChildren);
            timerOne.addEventListener(TimerEvent.TIMER, onTimerOne);
        }

        private function onTimerOne(evt:TimerEvent):void {
            // Wrap around to the first tab once the last one is reached.
            if (myViewStack.selectedIndex == myViewStack.numChildren - 1) {
                myViewStack.selectedIndex = 0;
                return;
            }
            myViewStack.selectedIndex++;
        }

        private function autoOne():void {
            if (!timerOne.running) {
                timerOne.start();
            }
        }

        private function manualOne():void {
            if (timerOne.running) {
                timerOne.stop();
            }
        }
    ]]>
    </mx:Script>
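(Incidentally, the wrap-around in onTimerOne can also be written as a single modulo step, which avoids the early return. A quick Python sanity check of the arithmetic, assuming 16 tabs with indices 0-15:)

```python
# The if/return wrap-around is equivalent to one modulo step.
NUM_TABS = 16

def next_tab(index):
    # After the last index (15), wrap back to the first tab (index 0).
    return (index + 1) % NUM_TABS

tabs = []
i = 0
for _ in range(NUM_TABS):
    i = next_tab(i)
    tabs.append(i)
print(tabs)  # visits indices 1 through 15, then lands back on 0 (tab 1)
```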
    4) Well, the fourth item I'm trying to work out: as I'm reading my data from an Xml file, I'd like a TextArea which shows the different countries from the Xml file for each ViewStack tab while auto-cycling through these tabs (1 to 16).
    The difficulty here is that I use this Xml with a specific urlID="1" to urlID="16", as partly shown below.
    <urlsOceania>
        <urlOceania urlID="1"/>
        <searchCountry>American Samoa</searchCountry>
        <etc></etc>
    </urlsOceania>
    I'm reading all the other items this way:
    source="{urlsOceania.urlOceania.(@urlID==1).etc}"
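(The same attribute-based lookup can be illustrated outside Flex. A minimal Python sketch, assuming the <urlOceania urlID="..."> element wraps its child elements - the self-closing tag in the snippet above is presumably a paste artifact - with hypothetical sample data:)

```python
import xml.etree.ElementTree as ET

# Equivalent of the E4X filter urlsOceania.urlOceania.(@urlID==n).searchCountry
doc = ET.fromstring("""
<urlsOceania>
    <urlOceania urlID="1">
        <searchCountry>American Samoa</searchCountry>
    </urlOceania>
    <urlOceania urlID="2">
        <searchCountry>Australia</searchCountry>
    </urlOceania>
</urlsOceania>
""")

def country_for(url_id):
    # Select the urlOceania node whose urlID attribute matches, then
    # read its searchCountry child; the attribute name must match exactly.
    node = doc.find(f"./urlOceania[@urlID='{url_id}']/searchCountry")
    return node.text if node is not None else None

print(country_for(1))  # American Samoa
```

The key point carried back to the E4X expression: the attribute name in the filter has to be spelled exactly as in the XML (urlID, not urID).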
    Thanks in advance aktell2007

    Thanks for the confirmation.  7 miles away is most likely using the same VZW tower but it does confirm the problem is not in your current location for us.
    You can look up local tower locations from many public websites such as the following:
    www.antennasearch.com
    www.cellreception.com
    http://www.evdoinfo.com/content/view/2990/63/
    The signal of -65 shows you have strong reception, but it doesn't show the entire picture.  Your tower could be overloaded or unauthenticating you.  There are lots of little issues that exist outside of the raw signal strength between the towers and the connecting devices that we users have no control over.  As you may guess, only a tower tech has access to identify and correct these things.
    Based on the picture of the back of the MBR1515/Netgear N300 router from Netgear, I would assume that only a normal-sized SIM card will fit.  I would not assume a micro SIM card will fit.  Since I do not have access to either the VZW or non-VZW 4G LTE router, I cannot confirm whether it will work or not.  You might have to give Netgear a call and ask.  Based on what I can see from the User Guides of both devices, the SIMs used for each should be compatible with each other.
    If you decide to purchase the non-VZW version, please post back your findings for us.

  • Serious issue with VLC (even downgrade does not fix it?!?)

    Hello all,
    yesterday I updated my Arch after a while. After the update, VLC stopped working; it just segfaults on start. Here is the log output:
    $ vlc -vvv
    VLC media player 2.1.4 Rincewind (revision 2.1.4-0-g2a072be)
    [0x1ccf058] main libvlc debug: VLC media player - 2.1.4 Rincewind
    [0x1ccf058] main libvlc debug: Copyright © 1996-2014 the VideoLAN team
    [0x1ccf058] main libvlc debug: revision 2.1.4-0-g2a072be
    [0x1ccf058] main libvlc debug: configured with ./configure '--prefix=/usr' '--sysconfdir=/etc' '--disable-rpath' '--enable-faad' '--enable-nls' '--enable-lirc' '--enable-ncurses' '--enable-realrtsp' '--enable-aa' '--enable-vcdx' '--enable-upnp' '--enable-opus' '--enable-sftp' 'LUAC=/usr/bin/luac' 'LUA_LIBS=-llua -lm ' 'RCC=/usr/bin/rcc-qt4' 'CFLAGS=-I/usr/include/samba-4.0' 'LDFLAGS=-Wl,-O1,--sort-common,--as-needed,-z,relro' 'CPPFLAGS=-I/usr/include/samba-4.0' 'CXXFLAGS=-march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4'
    [0x1ccf058] main libvlc debug: searching plug-in modules
    [0x1ccf058] main libvlc debug: loading plugins cache file /usr/lib/vlc/plugins/plugins.dat
    [0x1ccf058] main libvlc warning: cannot read /usr/lib/vlc/plugins/plugins.dat (No such file or directory)
    [0x1ccf058] main libvlc debug: recursively browsing `/usr/lib/vlc/plugins'
    Segmentation fault (core dumped)
    I tried downgrading VLC back to 2.1.2, which used to work, but to my unpleasant surprise it crashes with the same error:
    $ vlc -vvv
    VLC media player 2.1.2 Rincewind (revision 2.1.2-0-ga4c4876)
    [0x1488058] main libvlc debug: VLC media player - 2.1.2 Rincewind
    [0x1488058] main libvlc debug: Copyright © 1996-2013 the VideoLAN team
    [0x1488058] main libvlc debug: revision 2.1.2-0-ga4c4876
    [0x1488058] main libvlc debug: configured with ./configure '--prefix=/usr' '--sysconfdir=/etc' '--disable-rpath' '--enable-faad' '--enable-nls' '--enable-lirc' '--enable-ncurses' '--enable-realrtsp' '--enable-aa' '--enable-vcdx' '--enable-upnp' '--enable-opus' '--enable-sftp' 'LUAC=/usr/bin/luac' 'LUA_LIBS=-llua -lm ' 'RCC=/usr/bin/rcc-qt4' 'CFLAGS=-I/usr/include/samba-4.0' 'LDFLAGS=-Wl,-O1,--sort-common,--as-needed,-z,relro' 'CPPFLAGS=-I/usr/include/samba-4.0' 'CXXFLAGS=-march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4'
    [0x1488058] main libvlc debug: searching plug-in modules
    [0x1488058] main libvlc debug: loading plugins cache file /usr/lib/vlc/plugins/plugins.dat
    [0x1488058] main libvlc warning: cannot read /usr/lib/vlc/plugins/plugins.dat (No such file or directory)
    [0x1488058] main libvlc debug: recursively browsing `/usr/lib/vlc/plugins'
    Segmentation fault (core dumped)
    During the upgrade I noticed a strange segfault while updating VLC:
    upgrading vlc [####################################################] 100%
    /tmp/alpm_wQxpzI/.INSTALL: line 1: 10842 Segmentation fault (core dumped) usr/lib/vlc/vlc-cache-gen -f /usr/lib/vlc/plugins
    I even tried building my own VLC from git and after everything compiles, make fails with the same error:
    make[2]: Entering directory '/home/dodo/Build/vlc/vlc/bin'
    GEN ../modules/plugins.dat
    /bin/sh: line 4: 11486 Segmentation fault (core dumped) ./vlc-cache-gen ../modules
    I tried running vlc-cache-gen under gdb to investigate the segfault; it appears to be in glibc (?!?):
    (gdb) bt
    #0 0x00007ffff67ed44a in __strcmp_sse2_unaligned () from /usr/lib/libc.so.6
    #1 0x00007fffef413ab9 in g_str_equal () from /usr/lib/libglib-2.0.so.0
    #2 0x00007fffef4131e0 in g_hash_table_lookup () from /usr/lib/libglib-2.0.so.0
    #3 0x00007fffef4329a0 in g_quark_from_static_string () from /usr/lib/libglib-2.0.so.0
    #4 0x00007fffed7b989c in ?? () from /usr/lib/libgobject-2.0.so.0
    #5 0x00007ffff7dea9ca in call_init.part () from /lib64/ld-linux-x86-64.so.2
    #6 0x00007ffff7deaab3 in _dl_init_internal () from /lib64/ld-linux-x86-64.so.2
    #7 0x00007ffff7deec08 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
    #8 0x00007ffff7dea884 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
    #9 0x00007ffff7dee3fb in _dl_open () from /lib64/ld-linux-x86-64.so.2
    #10 0x00007ffff726c02b in ?? () from /usr/lib/libdl.so.2
    #11 0x00007ffff7dea884 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
    #12 0x00007ffff726c5dd in ?? () from /usr/lib/libdl.so.2
    #13 0x00007ffff726c0c1 in dlopen () from /usr/lib/libdl.so.2
    #14 0x00007ffff79623cb in ?? () from /usr/lib/libvlccore.so.7
    #15 0x00007ffff79476db in ?? () from /usr/lib/libvlccore.so.7
    #16 0x00007ffff794751a in ?? () from /usr/lib/libvlccore.so.7
    #17 0x00007ffff79473d9 in ?? () from /usr/lib/libvlccore.so.7
    #18 0x00007ffff794740e in ?? () from /usr/lib/libvlccore.so.7
    #19 0x00007ffff79470c3 in ?? () from /usr/lib/libvlccore.so.7
    ---Type <return> to continue, or q <return> to quit---
    #20 0x00007ffff7946ece in ?? () from /usr/lib/libvlccore.so.7
    #21 0x00007ffff7946a68 in ?? () from /usr/lib/libvlccore.so.7
    #22 0x00007ffff78bf944 in libvlc_InternalInit () from /usr/lib/libvlccore.so.7
    #23 0x00007ffff7bc573e in libvlc_new () from /usr/lib/libvlc.so.5
    #24 0x00000000004008fe in main (argc=2, argv=<optimized out>) at cachegen.c:99
    I tried downgrading glibc back to 2.8.12 (and all dependencies), but the error persists.
    The pacman upgrade log is here.
    NOTE:
    In my /etc/pacman.conf, I've got following packages set to ignore:
    IgnorePkg = linux linux-headers ati-dri mesa mesa-libgl xf86-video-ati xf86-video-vesa xf86-input-synaptics xf86-input-mouse xf86-input-keyboard xf86-input-evdev xorg-server gnupg libgcrypt glamor-egl
    This is because I have to stick with the 3.10.10 kernel, 9.2.0 ati-dri, 1.14.4 X server and 1.5.3 libgcrypt (an X server dependency), because if I upgrade any of these packages I end up with an unbootable system (my laptop, an HP Compaq nx9420 RU478EA, has an old ATI Radeon X1600 that is obviously not supported anymore with mesa 10 and kernel 3.11 and newer).
    Any ideas about how to make VLC work? It is my favourite video and DVB-T TV player...

    In the meantime I've solved the boot problem (see https://bbs.archlinux.org/viewtopic.php?id=178789) and I've been tracing steps to find out why VLC segfaults.
    So, I've pulled the latest VLC from their git and went on compiling. Of course, make failed while generating plugins.dat file. Target that builds that file uses vlc-cache-gen utility that segfaults. I've taken the liberty of modifying the source code and adding various printf's throughout the code to pinpoint the source code line that segfaults.
    The trace of vlc-cache-gen's source code showed me that the segfault happens while calling the libvlc_new function. I traced even further and found that the segfault happens in the call to module_LoadPlugins (line ~153 in src/libvlc.c, inside libvlc_InternalInit - note that the line numbers are offset because of my printf's). Tracing even deeper guided me to the function module_InitDynamic inside src/modules/bank.c (somewhere around line 638). For some plugins the module_Load function call worked, and for some it segfaulted.
    By looking at the implementation of the module_Load function in src/posix/plugin.c, I found out that the line that segfaults is "module_handle_t handle = dlopen (path, flags);" when trying to load "../modules/.libs/libnotify_plugin.so", i.e. the issue is not in VLC but in dlopen. I tried removing libnotify_plugin.so from the generated .libs/ plugins, but then the segfault appeared on libkate_plugin.so. Moreover, I had to remove the following plugins to avoid the segfault:
    libnotify_plugin.so
    libkate_plugin.so
    libgnomevfs_plugin.so
    gui/qt4/libqt4_plugin.so
    After removing those plugins, vlc-cache-gen did the job, but VLC still didn't work because obviously it requires at least libqt4_plugin.
    I've also tried reinstalling libnotify and then rebuilding VLC, but I nevertheless got a segfault on dlopen of libnotify_plugin.so and the other mentioned libraries.
    How should I approach the problem now? Obviously something is wrong with dlopen - this explains why downgrading VLC didn't fix the issue. How do I debug dlopen? A GDB backtrace of dlopen gives me the following trace:
    #0 0x00007ffff67ed44a in __strcmp_sse2_unaligned () from /usr/lib/libc.so.6
    #1 0x00007fffeebfcab9 in g_str_equal () from /usr/lib/libglib-2.0.so.0
    #2 0x00007fffeebfc1e0 in g_hash_table_lookup () from /usr/lib/libglib-2.0.so.0
    #3 0x00007fffeec1b9a0 in g_quark_from_static_string () from /usr/lib/libglib-2.0.so.0
    #4 0x00007fffecfa289c in ?? () from /usr/lib/libgobject-2.0.so.0
    #5 0x00007ffff7dea9ca in call_init.part () from /lib64/ld-linux-x86-64.so.2
    #6 0x00007ffff7deaab3 in _dl_init_internal () from /lib64/ld-linux-x86-64.so.2
    #7 0x00007ffff7deec08 in dl_open_worker () from /lib64/ld-linux-x86-64.so.2
    #8 0x00007ffff7dea884 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
    #9 0x00007ffff7dee3fb in _dl_open () from /lib64/ld-linux-x86-64.so.2
    #10 0x00007ffff726c02b in ?? () from /usr/lib/libdl.so.2
    #11 0x00007ffff7dea884 in _dl_catch_error () from /lib64/ld-linux-x86-64.so.2
    #12 0x00007ffff726c5dd in ?? () from /usr/lib/libdl.so.2
    #13 0x00007ffff726c0c1 in dlopen () from /usr/lib/libdl.so.2
    My guess is that one of the strings in g_str_equal is either NULL or points to freed memory. But how to verify that?
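One way to narrow it down outside of VLC is to reproduce the bare dlopen in a throwaway process; if loading the plugin alone still crashes inside the GLib/GObject static initialisers, the problem is in the library stack rather than in VLC. A minimal Python sketch using ctypes (which wraps the same dlopen; the plugin path below is the one from the trace, so substitute any of the crashing .so files):

```python
import ctypes

def try_dlopen(path):
    # dlopen `path` in this process with global symbol binding, roughly
    # what the plugin loader does; a hard segfault here (rather than a
    # clean OSError) reproduces the vlc-cache-gen crash with no VLC code
    # involved at all.
    try:
        ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)
        return "loaded"
    except OSError as exc:
        return "failed cleanly: %s" % exc

# Point this at any of the crashing plugins from the list above.
print(try_dlopen("../modules/.libs/libnotify_plugin.so"))
```

A missing file fails cleanly with OSError; a segfault during the load would confirm the crash lives in a library's constructor, not in the caller.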
    Any ideas would be highly appreciated.

  • Issue with non-english application in thin client - virtual directory issue

    Hi,
    I have just completed a Siebel 8 installation on a Solaris machine along with SunONE Web Server 6.1.
    I have installed both ENU and ARA language packs for Siebel Enterprise and Sweapps.
    The issue is that after installation and repository installation for both ENU and ARA, I am able to open only the ENU applications in the thin client.
    When I try to open the ARA applications, the browser gives a "Page not found" error.
    I verified first whether the Siebel components are running or not, and confirmed that, for example for Call Center, the SCCObjMgr_ara component is active and online.
    But what I observed is, in SUNOne web server instance's config folder, there is a file obj.conf.
    In this file there are entries for only enu applications like,
    NameTrans fn="pfx2dir" from="/callcenter_enu" dir="/siebelapp/sweapp/public/enu"
    There are no links for ARA applications.
    I think this could be the reason for the issue.
    But while installing and configuring sweapps, I selected the logical profile location "/siebelapp/gtwysrvr/admin/Webserver".
    So this should create such links in obj.conf for ARA as it created for the ENU apps, but it didn't happen.
    Please advise how to get the ARA applications working.
    Thanks
    Vamshi
    Edited by: user4619223 on Mar 23, 2010 11:36 AM

    I see some message about a proxy. Have you checked that you can use the proxy from your network?
    If not, you should turn the proxy off.
    Timo

  • Issue with passing parameters through Java-JSP in a report with cross tab

    Can anyone tell me if there's a bug in the Java SDK in passing parameters to a report (rpt file) that has a cross tab in it?
    I have a report that works perfectly fine
    (i)     with ODBC through the IDE and also through the browser (JSP page)
    (ii)    with JDBC in the CR 2011 IDE
    The rpt file has a cross tab and accepts two parameters.
    When I run the JDBC report through JSP the parameters are never considered. The same report works fine when I remove the cross tab and make it a simple report.
    I have posted this to the CR SDK forum and have not received any reply. This has become a blocker, and because of it our delivery has been postponed. We are left with two choices:
    (i)   re-write the rpt files not to have cross-tabs, which would take significant effort,
    OR
    (ii)  abandon the Crystal solution and try out any other Java-based solutions available.
    I have given the code here in this forum posting:
    CR 2011 - JDBC Report Issue in passing parameters
    TIA
    DRG

    Mr.James,
    Thank you for the reply.
    As I stated earlier, we were using the latest service pack (12) when I generated the log file that was uploaded earlier.
    To confirm this further, I downloaded the complete Eclipse bundle from the SDN site and reran the rpt files. No change in the behaviour, and the bug is reproducible.
    You are right about the parameters; we are passing {?@Direction} is: n(1.0) and
    {?@BDate} is: dt(d(1973-01-01),t(00:00:00.453000000)) as parameters in the JSP, and we get 146 records when we directly execute the stored procedure. The date and direction parameter values stored at design time are different: '1965-01-01' and Direction 1.
    When we run the JSP page, the parameter that is passed through the JSP page is displayed correctly at the top right of the report view. But the data displayed in the cross tab does not correspond to the date and direction parameters; it corresponds to 1965-01-01 and direction 1, which were saved at design time.
    You can test this by modifying the parameter values in the JSP page that I sent earlier. You will see the displayed data will remain same irrespective of the parameter.
    Further to note, Before each trial run, I modify the parameters in JSP page, build them and redeploy so that caching does not affect the end result.
    We observe this behaviour on all the reports that have cross-tabs. These reports work perfectly fine when rendered through the ODBC-ActiveX viewer, and the bug is observable only when run through the Java runtime library. We get this bug on the view, export and print functionalities as well.
    Additionally, we tested the same in
    (i)    CR version 2008 instead of CR 2011,
    (ii)   different browsers ranging from IE 7 through 9 and FF 7.
    The complete environment and the various software we used for this testing are:
    OS      : XP Latest updates as on Oct 2011.
    App Server: GlassFish Version 3 with Java version 1.6 and build 21
    Database server ; SQL Server 2005. SP 3 - Dev Ed.
    JTds JDBC type 4 driver version - 1.2.5  from source forge.
    Eclipse : Helios along with crystal libraries directly downloaded from SDN site.
    I am uploading the log file that is generated when rendering the rpt for view in IE 8
    Regards
    DRG

  • Issue with mail trigger in a particular client

    Hi,
    We are facing a problem in our system wherein mails are not getting triggered from one particular client. I have checked all the settings in SCOT and have gone through a few notes which talk about SMTP configuration. This client was newly created, and only this client has the problem. The same program works fine in the other clients of the same system.
    When I trigger a mail from SBWP, I can see it in the outbox, but the start date and recipient are blank, with a start time of 00:00. Any inputs?
    Thanks in advance,
    Arun Raghavan

    Hi All,
    Finally got a solution for this. There seems to be some inconsistency in number ranges which created this problem. This seems to be a known error in client copy or system copy.
    Program RSBCS_NUMBER_RANGE will remove the inconsistencies.
    Thanks
    Arun Raghavan

  • How can I resolve a possible security issue with unauthorized computers through QuickTime, as a diagnostic and screenshots show evidence of a Mac computer and I don't have one?

    I'm trying to resolve an issue that I have with my iPhone 4s through QuickTime. I think it might be an embedded MMS that broadcasts my info as well as allowing remote access sometimes. Any answers or reports of similar activity? I can support this with screenshots of public information. This shows in my email accounts as well.

    Is your phone jailbroken? If it is not, you're probably not seeing what you think you're seeing. If your phone hasn't been jailbroken, it's certainly not being controlled remotely. What do you mean by an "embedded mms"? Are you sharing an Apple ID with anyone? Or could someone have gotten access to your Apple ID information?

  • Issue with invite accepted through google calendar.

    I sent an invite to an event through iCal. The recipient accepted through Google Calendar, which in turn sent me an email accepting. However, my original meeting status hasn't changed to reflect that it was accepted. Am I missing something here? How does iCal get updated to know that the recipients have accepted?

    UPDATE: My problem was not related to the Sync History, but rather to the fact that Apple decided to stop supporting Outlook versions older than 2003. I realize that I shouldn't rely on software that's almost 10 years old, but I figured, if it ain't broke, don't fix it. I resolved my calendar sync issues by upgrading to a newer version of Outlook.
    Before resorting to this, I successfully rolled back my iTunes to an earlier version (following directions easily found via Google), but the older version of iTunes didn't recognize the "itunes_library.itl" file in my iTunes folder. Older versions of this file are archived, so substituting one of those into the main iTunes directory solved that problem; however, I lost about a month of changes to my library. After tooling around with these issues, I finally decided that upgrading Outlook was the simpler solution.
    It is curious, however, that iTunes didn't identify the problem for me, maybe with a message like "iTunes no longer supports sync for your version of Outlook". It is also curious that my problem was only with my calendar: my contacts and notes were still syncing fine both to and from the phone and computer.
    Hope this helps others who experienced problems with iTunes and older versions of Outlook.

  • Issue with Updating Apps through iTunes

    I have been having this issue for some time now, through multiple versions of iTunes. Whenever I open iTunes and see that there are apps I need to update, I'll click on Applications, click Update Apps, see the apps that need updating, and click Update. I put in my password, but then I get an error saying:
    We could not complete your iTunes store request. There is not enough memory available.
    There is an error in the iTunes Store. Please try again later.
    Obviously I have enough memory and space on my computer, but I'm not sure if this is an error with the store itself or with my computer somehow. I can download new apps without a problem, and purchase and download anything through iTunes even after that error pops up, but I just can't update any apps. I can update them on my phone, and if I sync my phone with iTunes and then try updating apps that were not on my phone when I updated them, it works then. Not sure what would cause this; it's not that big of a deal, just an annoyance and something I find a bit odd.
    Thanks for any info and advice.

    What part of no wifi did you not understand?
    I'm well aware that I can update through the app store, if you had read the original question properly you would see that.

  • Issue with sending mail through java stored procedure in Oracle

    Hello
    I am using an Oracle 9i DB. I created a java stored procedure to send mail using the code given below. The java class works fine standalone: when it's run from Java, mail is sent as desired. But when the java stored procedure is called from PL/SQL, a "Must issue a STARTTLS command first" error is thrown. Please let me know if I am missing something. I tried the same code on an 11.2.0.2 DB and got the same error.
    Error:
    javax.mail.MessagingException: 530 5.7.0 Must issue a STARTTLS command first. va6sm31201010igc.6
    Code for creating java stored procedure: (T1 is the table created for debugging)
    ==================================================
    create or replace and compile java source named "MailUtil1" AS
    import java.util.Enumeration;
    import java.util.Properties;
    import javax.mail.Message;
    import javax.mail.Session;
    import javax.mail.Transport;
    import javax.mail.internet.InternetAddress;
    import javax.mail.internet.MimeMessage;
    public class MailUtil1 {
        public static void sendMailwithSTARTTLS(String host,    // smtp.projectp.com
                String from,    // sender mail id
                String fromPwd, // sender mail pwd
                String port,    // 587
                String to,      // recipient email ids
                String cc,
                String subject,
                String messageBody) {
            try {
                Properties props = System.getProperties();
                props.put("mail.smtp.starttls.enable", "true"); // lowercase "true"
                props.put("mail.smtp.host", host);
                props.put("mail.smtp.user", from);
                props.put("mail.smtp.password", fromPwd);
                props.put("mail.smtp.port", port);
                props.put("mail.smtp.auth", "true");
                #sql { insert into t1 (c1) values ('1'||:host) };
                Session session = Session.getDefaultInstance(props, null);
                MimeMessage message = new MimeMessage(session);
                message.setFrom(new InternetAddress(from));
                #sql { insert into t1 (c1) values ('2') };
                InternetAddress[] toAddress = new InternetAddress[1];
                toAddress[0] = new InternetAddress(to);
                for (int i = 0; i < toAddress.length; i++) {
                    message.addRecipient(Message.RecipientType.TO, toAddress[i]);
                }
                if (cc != null) {
                    InternetAddress[] ccAddress = new InternetAddress[1];
                    ccAddress[0] = new InternetAddress(cc);
                    for (int j = 0; j < ccAddress.length; j++) {
                        message.addRecipient(Message.RecipientType.CC, ccAddress[j]);
                    }
                }
                message.setSubject(subject);
                message.setText(messageBody);
                message.saveChanges();
                #sql { insert into t1 (c1) values ('3') };
                // Dump all header lines into T1 for debugging
                Enumeration en = message.getAllHeaderLines();
                String token;
                while (en.hasMoreElements()) {
                    token = "E:" + en.nextElement().toString();
                    #sql { insert into t1 (c1) values (:token) };
                }
                token = "ConTyp:" + message.getContentType();
                #sql { insert into t1 (c1) values (:token) };
                token = "Encod:" + message.getEncoding();
                #sql { insert into t1 (c1) values (:token) };
                token = "Con:" + message.getContent();
                #sql { insert into t1 (c1) values (:token) };
                Transport transport = session.getTransport("smtp");
                #sql { insert into t1 (c1) values ('3.1') };
                transport.connect(host, from, fromPwd);
                #sql { insert into t1 (c1) values ('3.2') };
                transport.sendMessage(message, message.getAllRecipients());
                #sql { insert into t1 (c1) values ('3.3') };
                transport.close();
                #sql { insert into t1 (c1) values ('4') };
            } catch (Exception e) {
                e.printStackTrace();
                String ex = e.toString();
                try {
                    #sql { insert into t1 (c1) values (:ex) };
                } catch (Exception e1) {
                }
            }
        }
    }
    Edited by: user12050615 on Jan 16, 2012 12:18 AM

    Hello,
    Thanks for the reply. Actually, I had seen that post before creating this thread. I thought that I could use JavaMail to work around this problem, so I created a java class that successfully sends mail to an SSL host. I then tried to call this java class from PL/SQL through a java stored procedure, but that did not work.
    So, is this not supported in Oracle? Please note that I have tested this in both 9i and 11g, and in both versions I got the error. You can refer to the code in the post above.
    Thanks
    Srikanth
    Edited by: user12050615 on Jan 16, 2012 12:17 AM
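
    For anyone hitting the same 530 response: "Must issue a STARTTLS command first" means the SMTP server refuses authentication and mail submission until the plain connection is upgraded to TLS, and in JavaMail that upgrade is driven entirely by session properties. A minimal sketch of just the property setup (the host and port here are placeholders, and note the value must be the lowercase string "true"):

    ```java
    import java.util.Properties;

    public class StarttlsProps {
        // Build the session properties that tell JavaMail to upgrade the
        // plain SMTP connection with STARTTLS before authenticating.
        public static Properties starttlsProps(String host, String port) {
            Properties props = new Properties();
            props.put("mail.smtp.host", host);
            props.put("mail.smtp.port", port);              // 587 is the usual submission port
            props.put("mail.smtp.auth", "true");            // authenticate after the TLS upgrade
            props.put("mail.smtp.starttls.enable", "true"); // lowercase "true", not "True"
            return props;
        }

        public static void main(String[] args) {
            Properties p = starttlsProps("smtp.example.com", "587");
            System.out.println(p.getProperty("mail.smtp.starttls.enable"));
            System.out.println(p.getProperty("mail.smtp.port"));
        }
    }
    ```

    Two things worth verifying in the stored-procedure case (suggestions, not a confirmed fix): Session.getDefaultInstance caches the properties from its first call in the JVM, so a session created earlier with different settings can silently ignore the STARTTLS flag (Session.getInstance avoids this), and the Oracle database JVM must have the JavaMail classes and working SSL/TLS support loaded (e.g. via loadjava), which was incomplete in older releases such as 9i.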

  • Issue with same R/3 system ID - Client in XI landscape

    Hi there,
    <b>WHAT:</b>
    We are experiencing a serious problem in our XI landscape.
    The source of the problem is that different SAP systems within the same landscape have the same system ID and client number. Our customer has divided its business into internal companies which use different SAP boxes/instances but share the same XI environment.
    <b>HOW:</b>
    All systems have been defined in SLD as technical and business systems. This is an example how it looks like in SLD:
    DEV on business-system1; client 200
    DEV on business-system2; client 200
    PRD on business-system5; client 400
    PRD on business-system6; client 400
    <b>WHEN:</b>
    When we try to add a second business system/service (in the Integration Builder directory) with the same combination of system ID (e.g. DEV) and client number (e.g. 200), XI does not allow us to activate this second system.
    The error message we get: there is already another system configured in the directory with this same combination of system ID and client.
    Is this behavior normal? Why is the integration-builder/directory looking at the technical details of the configured systems in SLD?
    Is this a feature or a bug?
    Please send me any feedback or some clues on how to figure out what's going on here. Thanks all.
    Rob.
    Message was edited by: Roberto Viana

    > Basically we can have only one BS as the attributes
    > are the same but installed on different TS? Is there any
    > specific need that you want two BS? Can you tell
    > whether having one BS will cause any problem for your
    > interfaces? If not, have one BS and that should be fine.
    Sravya,
    Actually that's the situation we have now: we have two different BS's (in this case with the same attributes, SYSTEM ID=DEV and CLIENT=200) installed on top of two different TS's. This configuration is not accepted by the directory in the configuration builder. As I said before, when you try to add the second BS with the same attributes, XI generates an error and does not allow you to activate the change list.
    So these two BS's have to be configured in SLD as different BS's; otherwise XI would not be able to correctly do the technical routing.
    What I don't understand is why XI is looking at technical data from these systems at configuration time. As far as I know, at configuration time XI is only supposed to work with the data declared in the business system information, not with technical details like system IDs and client numbers. That kind of information belongs to the SLD layer, not to the configuration level.
    <i>Please correct me if I'm wrong, thanks.</i>
    R.
    Message was edited by: Roberto Viana
