81 posts in the 'Documents' category

  1. 2014.03.08 Making erchef
  2. 2014.03.07 Tcp ErLang
  3. 2014.03.04 How do I enable core dumps for everybody
  4. 2013.10.10 Summary of the components used in the Usergrid stack
  5. 2012.09.20 Categories of performance data
  6. 2012.09.19 Documentation..
  7. 2012.09.19 Building a Java launcher
  8. 2012.09.14 님의 침묵 (The Silence of the Beloved)
  9. 2012.09.05 Enterprise Architecture (from MSDN)
  10. 2012.03.21 JVM options for handling OOM

Making erchef

Erlang 2014. 3. 8. 15:53


Opscode has published a new Chef server called erchef, built in Erlang.

This post is a log of building erchef on Ubuntu 12.04.1.

1. Install Erlang

The default Ubuntu Erlang apt package is a little old, so I compiled it from source.

% sudo apt-get install make gcc libncurses5-dev libssl-dev \
  libssl1.0.0 openssl libstdc++6 libstdc++6-4.6-dev
% curl -O http://www.erlang.org/download/otp_src_R15B02.tar.gz
% gzip -dc otp_src_R15B02.tar.gz | tar xvf -
% cd otp_src_R15B02
% ./configure --prefix=/usr/local/ && make
% sudo make install

Next, compile rebar (the dependency management tool).

% git clone git://github.com/basho/rebar.git
% cd rebar
% ./bootstrap
% sudo cp -p rebar /usr/local/bin/

2. Build erchef

Now build it:

% git clone git://github.com/opscode/erchef.git
% cd erchef
% make rel

That’s it. Easy.

Or so I thought...

...until it came to launching erchef.

3. Creating app.config

A configuration file called app.config is required to launch erchef. However, the repository does not contain one.

After some googling, I found the omnibus-chef repository.

% sudo apt-get install ruby-bundler rake
% git clone git://github.com/opscode/omnibus-chef.git
% cd omnibus-chef
% bundle install
% mv omnibus.rb.example omnibus.rb
% sudo CHEF_GIT_REV=10.14.4 rake projects:chef-server

But I could not get this omnibus chef-server to launch (sorry, I forget why). So I only used its attributes and templates to build app.config.

Here is the app.config I made, after a lot (really, a lot) of trial and error.

Some notes on creating app.config (a rough sketch of these settings follows after the list):

  • Use a relative path for the log directory, not an absolute one.
  • I used ‘/’ as the RabbitMQ vhost; the guest account can read and write there, but this should be changed.
  • The Postgres database name and password should be changed.
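For reference, an OTP app.config is just an Erlang term: a list of {Application, [{Key, Value}]} tuples. The fragment below is only a minimal sketch of the three notes above; the application and key names (a lager-style log directory, a chef_index RabbitMQ vhost, sqerl-style database keys) are assumptions for illustration, not necessarily the exact keys this erchef version reads, so check the omnibus-chef templates for the real ones.

%% app.config (illustrative fragment only; key names are assumptions)
[
 {lager,      [{log_dir, "log/chef-server/erchef"}]},   %% relative path, not absolute
 {chef_index, [{rabbitmq_vhost, "/"}]},                  %% works for guest, but change it
 {sqerl,      [{db_name, "opscode_chef"},
               {db_user, "postgres"},
               {db_pass, "CHANGE_ME"}]}                  %% change dbname and password
].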

4. Running erchef

OK. now I got the shiny app.config. Let’s launch erchef!

Install the dependencies (is Solr even required?).
% apt-get install postgres rabbitmq-server openjdk-7-jre solr-common solr-jetty

Create the log directory.
% mkdir -p log/chef-server/erchef/

Load the Postgres schema (I cut a corner and used the postgres user).
% sudo -u postgres createdb opscode_chef
% sudo -u postgres psql opscode_chef -f deps/chef_db/priv/pgsql_schema.sql

Create the Chef certificates.
% sudo escript bin/bootstrap-chef-server
client <<"admin">> created. Key written to
<<"/etc/chef-server/admin.pem">>
client <<"chef-validator">> created. Key written to
<<"/etc/chef-server/chef-validator.pem">>
client <<"chef-webui">> created. Key written to
<<"/etc/chef-server/chef-webui.pem">>
environment '_default' created

Place app.config under erchef/etc.
% mv ~/app.config etc

Launch erchef.
% sudo bin/erchef start

Confirm.
% sudo bin/erchef ping
pong

Yay, erchef finally started.

Next, configure the Chef client.

% knife configure -i
Overwrite /home/ubuntu/.chef/knife.rb? (Y/N) Y
Please enter the chef server URL: [http://blah.example:4000]   http://localhost:4000
Please enter a clientname for the new client: [ubuntu] user1  <-- !Another user!
Please enter the existing admin clientname: [chef-webui]
Please enter the location of the existing admin client's private key:
[/etc/chef/webui.pem] /etc/chef-server/chef-webui.pem
Please enter the validation clientname: [chef-validator]
Please enter the location of the validation key:
[/etc/chef/validation.pem] /etc/chef-server/chef-validator.pem
Please enter the path to a chef repository (or leave blank):
Creating initial API user...
Created client[user1]
Configuration file written to /home/ubuntu/.chef/knife.rb

That’s it.

% knife client list
admin
chef-validator
chef-webui
user1

5. However...

I can list clients, roles, and nodes. However, I still cannot upload cookbooks or rebuild the index. If you have any suggestions, please let me know.


Posted by sjokim

Tcp ErLang

Erlang 2014. 3. 7. 19:41

One apology up front: I do not know of a decent code highlighter for Erlang, so the code below comes with no highlighting at all (black screen, white letters, as the saying goes). Let's do our best to tell the code and the comments apart in our heads.


What has always bothered me about the write-ups on functional languages you find on the internet is that the code they present is almost never something you would use in real life. Functional languages are rarely used in the field, so I suppose it is hard to find anything beyond study code. So, as evidence(?) that a functional language can earn its keep in production, I decided to cut a TCP server I am actually using out of its project and present it here. The design of this server is based on the simple OTP-based TCP server design introduced at http://20bits.com/article/erlang-a-generalized-tcp-server.


This code is built on an OTP gen_server, and each socket gets its own process so that sockets are handled asynchronously. The code below is only a slice cut out of a working server, so by itself it does nothing; if you have something in mind, put some flesh on this fragment and bring it to life as a proper application :) For reference, this code is part of a cache server I wrote. That server also contains a much more thorough error handler, a working data parser, ETS and data lookups served through the gen_server, and cache-eviction code. (A rough sketch of the lookup idea appears right below; the network module itself follows after that.)
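The sketch below is an assumption about what "lookups through the gen_server" could look like, not the author's actual cache code: a separate gen_server owns an ETS table and serves reads via handle_call/3.

%% hypothetical cache-module fragment (not part of the tcp_network module below)
init(_Args) ->
    Tab = ets:new(cache, [set, protected]),
    {ok, Tab}.

handle_call({get, Key}, _From, Tab) ->
    Reply = case ets:lookup(Tab, Key) of
                [{Key, Value}] -> {ok, Value};
                []             -> not_found
            end,
    {reply, Reply, Tab}.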


-module(tcp_network).
-behaviour(gen_server).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2, terminate/2, code_change/3]).

-define(TCP_OPTIONS, [binary, {active, once}, {recbuf, 4096},  {reuseaddr, true}]).
%% ====================================================================
%% API functions
%% ====================================================================
-export([open/1]).

open(Port) ->
    gen_server:start_link({local,?MODULE}, ?MODULE, Port, []).

%% ====================================================================
%% Behavioural functions 
%% ====================================================================

%% init/1
init(Port) ->
    case gen_tcp:listen(Port, ?TCP_OPTIONS) of
        {ok, Listen} ->
            %% the server state is created here:
            %% a {tcp listener pid, tcp listen socket} tuple
            {ok, accept({self(), Listen})};
        {error, Reason} ->
            {stop, Reason}
    end.

%% async cast: gets an accept_loop ready for the next socket.
%% handle_cast, accept and accept_loop are mutually recursive;
%% even so, accept_loop runs in a different process from the other two.
handle_cast({accepted}, State) -> {noreply, accept(State)}.

handle_call(_Request, _From, State) -> {noreply, State}.
handle_info(_Info, State) -> {noreply, State}.
terminate(_Reason, _State) -> ok.
code_change(_OldVsn, State, _Extra) -> {ok, State}.

%% ====================================================================
%% Internal functions
%% ====================================================================

%% spawn the process that will wait for the next incoming connection
accept({Server, Listen}) ->
    proc_lib:spawn_link(fun() -> accept_loop(Server, Listen) end),
    {Server, Listen}.

accept_loop(Server, Listen) ->
    case gen_tcp:accept(Listen) of
        {ok, Socket} ->
            %% a connection was accepted:
            %% cast back so the gen_server prepares for the next socket
            gen_server:cast(Server, {accepted}),

            %% this process is done waiting for connections;
            %% enter the recursive loop that handles this socket's data
            receiver(Socket);
        {error, _Reason} ->
            %% accepting the connection failed:
            %% in this example we simply ask for the next accept loop anyway;
            %% depending on the situation you may need a separate cast
            %% to handle this error
            gen_server:cast(Server, {accepted}),

            %% ...... any other error handling you need ......

            %% once done, terminate this process
            exit(normal)
    end.

%% receiver handles a single socket.
%% receiver runs in the same process as the accept_loop that called it.
receiver(Socket) ->
    receive
        {tcp, Socket, Bin} ->
            %% Bin is the binary data that arrived over TCP
            %% ...... whatever processing the data needs ......

            %% the server is throttled with the {active, once} option;
            %% when the work is done, switch back to {active, once}
            %% so the next packet can be delivered
            inet:setopts(Socket, [{active, once}]),
            %% recurse to receive the next data
            receiver(Socket);
        {tcp_closed, Socket} ->
            %% the socket was closed: terminate this process
            exit(normal)
    end.


If you want the individual requests arriving on one socket to be handled asynchronously, consider having receiver spawn additional processes to handle the received data. In particular, when the requests differ widely in processing time, so that synchronous handling is inefficient, and the client can accept asynchronous responses, spawning one more worker from receiver to split off the work is pretty much essential (a minimal sketch follows below). Also keep in mind that splitting work into small per-process units is Erlang's strategy for making use of multi-core CPUs. In general, request handling is best split to an appropriate degree.[1]
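A minimal sketch of that worker-per-message variant, assuming some hypothetical handle_request/2 that does the actual work (and replies on the socket itself); it is not part of the server above:

receiver(Socket) ->
    receive
        {tcp, Socket, Bin} ->
            %% hand the payload to a separate worker process so a slow
            %% request cannot block this socket loop
            proc_lib:spawn_link(fun() -> handle_request(Socket, Bin) end),
            %% go straight back to {active, once} and wait for the next packet
            inet:setopts(Socket, [{active, once}]),
            receiver(Socket);
        {tcp_closed, Socket} ->
            exit(normal)
    end.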


This code has no framing header such as TPKT, so the data simply gets chopped wherever recbuf fills up. If you want to communicate with TPKT length framing, add the {packet, tpkt} tuple to TCP_OPTIONS, as in the variant below.
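A one-line sketch of that change; the other options stay as in the module above. With tpkt framing, gen_tcp delivers one message per complete TPKT packet on receive (the header is not stripped, and you are responsible for writing a correct header when sending):

-define(TCP_OPTIONS, [binary, {active, once}, {packet, tpkt}, {reuseaddr, true}]).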

  1. Admittedly, "appropriate" is a vague word. But there is no quantitative rule for this, so you simply have to try it and decide. [back to the text]

Posted by sjokim


How do I enable core dumps for everybody

http://www.akadia.com/services/ora_enable_core.html

Overview

In most Linux distributions, core file creation is disabled by default for normal users. However, it can be necessary to enable this feature for an application (e.g. Oracle). For example, if you encounter an ORA-7445 error in Oracle, then it must be possible to write a core file for the user 'oracle'.

To enable writing core files you use the ulimit command; it controls the resources available to processes started by the shell, on systems that allow such control.

If you try to enable core files, you usually run into the following problem. Normally SSH is used to log on to the server.

ssh oracle@ora-server
$ ulimit -a

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Now try (not as user root) to change the core file size to unlimited:

ulimit -c unlimited
-bash: ulimit: core file size: cannot modify limit: Operation not permitted

Solution

  1. Check Environment for ulimit

    The first step is to check that you don't set ulimit -c 0 in any shell configuration files for this user, for example in $HOME/.bash_profile or $HOME/.bashrc. Comment out or remove any such entry.

    #
    # Do not produce core dumps
    #
    # ulimit -c 0
     
  2. Globally enable Core Dumps

    This must be done as user root, usually in /etc/security/limits.conf

    # /etc/security/limits.conf
    #
    # Each line describes a limit for a user in the form:
    #
    # <domain> <type> <item> <value>
    #
    *  soft  core  unlimited

     
  3. Logoff and Logon again and set ulimit

    ssh oracle@ora-server
    $ ulimit -c
    0

    Try to set the limit as user root first

    su -
    ulimit -c unlimited

    ulimit -c
    unlimited

    Now you can set ulimit also for user oracle

    su - oracle
    ulimit -c unlimited
    ulimit -c
    unlimited

Perhaps the last step (3) is not necessary, but we have found that this is the way that always works. The core file size limit is usually also set in other configuration files. If you want to enable cores, make sure the corresponding lines are commented out, as below.

In /etc/profile (Redhat)

# No core files by default
# ulimit -S -c 0 > /dev/null 2>&1

In /etc/init.d/functions (Redhat)

# make sure it doesn't core dump anywhere unless requested
# ulimit -S -c ${DAEMON_COREFILE_LIMIT:-0} >/dev/null 2>&1

Now core files can be generated from this shell, so check ulimit first.

$ ulimit -a

core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited


Posted by sjokim

Structure of the Usergrid stack and a summary of the components it uses


1. hector v1.1

A high-level client for Cassandra.

https://github.com/hector-client/

- high level, simple object oriented interface to cassandra

- failover behavior on the client side

- connection pooling for improved performance and scalability

- JMX counters for monitoring and management

- configurable and extensible load balancing with three algorithms to choose from: round robin (the default), least active, and a phi-accrual style response time detector

- complete encapsulation of the underlying Thrift API and structs

- automatic retry of downed hosts

- automatic discovery of additional hosts in the cluster

- suspension of hosts for a short period of time after several timeouts

- simple ORM layer that works

- a type-safe approach to dealing with Apache Cassandra's data model


2. common pool v1.5.3

An object pooling component.

- A generic object pool interface that clients and implementors can use to provide easily interchangeable pooling implementations

- A toolkit for creating modular object pools.

- Several general purpose pool implementations.

http://commons.apache.org/proper/commons-pool/


3. uuid v3.2.0

A component that generates UUIDs based on the MAC address.

http://johannburkard.de/software/uuid/

Note) How to find the MAC address

Microsoft Windows

ipconfig /all

Solaris

arp `uname -n`

Mac OS X, Linux, BSD, other Unices

ifconfig -a

HP-UX

/usr/sbin/lanscan

Solaris 11

dladm show-phys -m


4. Speed4j v0.9

Contains the features you need when measuring elapsed time.

https://github.com/jalkanen/speed4j


5. metrics v2.1.2

A Java monitoring tool. It collects counters for key metrics inside the Java process and reports them via JMX, STDOUT, HTML, and so on.

https://github.com/codahale/metrics


Similar competing components

- Perf4J: Perf4J is a set of utilities for calculating and displaying performance statistics for Java code. http://perf4j.codehaus.org/

- ERMA: ERMA (Extremely Reusable Monitoring API) is an instrumentation API that has been designed to be applicable for all monitoring needs. http://erma.wikidot.com/

- javasimon: Java Simon is a simple monitoring API that allows you to follow and better understand your application. Monitors (familiarly called Simons) are placed directly into your code and you can choose whether you want to count something or measure time/duration. https://code.google.com/p/javasimon/

- Glassbox: The Glassbox troubleshooter is an automated troubleshooting and monitoring agent for Java applications that diagnoses common problems with one-click. http://glassbox.sourceforge.net/glassbox/Home.html

- InfraRED: InfraRED is a tool for monitoring performance of a Java EE application and diagnosing performance problems. It collects metrics about various aspects of an application's performance and makes it available for quantitative analysis of the application. http://infrared.sourceforge.net/versions/latest/


6. cassandra v1.1.6

Cassandra is a distributed database, open-sourced to Apache by Facebook in 2008, built on a column-oriented data model in the style of Google's Bigtable and a distribution model in the style of Amazon's Dynamo.

http://cassandra.apache.org/


7. snappy 1.0.4

snappy-java is a Java port of snappy (http://code.google.com/p/snappy/), a fast C++ compressor/decompressor developed by Google.

http://xerial.org/snappy-java/


8. ning-compress 0.8.4

Ning-compress is a Java library for encoding and decoding data in LZF format, written by Tatu Saloranta (tatu.saloranta@iki.fi)

Data format and algorithm based on original LZF library by Marc A Lehmann. See LZF Format for full description.

https://github.com/ning/compress

9. concurrentlinkedhashmap v1.3

A high performance version of java.util.LinkedHashMap for use as a software cache.

https://code.google.com/p/concurrentlinkedhashmap/


10. antlr v3.2

ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It's widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees. (A text parser generator.)

http://www.antlr.org/

11. avro v1.4.0

Apache Avro™ is a data serialization system.

Avro provides:

    - Rich data structures.

    - A compact, fast, binary data format.

    - A container file, to store persistent data.

    - Remote procedure call (RPC).

    - Simple integration with dynamic languages. Code generation is not required to read or write data files nor to use or implement RPC protocols. Code generation is an optional optimization, only worth implementing for statically typed languages.

Implementations are available for C, C++, C#, Java, PHP, Python, and Ruby.

http://avro.apache.org/


12. jetty v6.1

Jetty provides a Web server and javax.servlet container, plus support for SPDY, WebSocket, OSGi, JMX, JNDI, JAAS and many other integrations

http://www.eclipse.org/jetty/


13. json-simple v1.1

JSON.simple is a simple Java toolkit for JSON. You can use JSON.simple to encode or decode JSON text.

https://code.google.com/p/json-simple/


14. high-scale-lib v1.1.2

- NonBlockingHashMap - Fast, concurrent, lock-free HashMap. Linear scaling to 768 CPUs.

- NonBlockingHashMapLong - Same as above, but using primitive 'long' keys

- NonBlockingHashSet - A Set version of NBHM

- NonBlockingSetInt - A fast fully concurrent BitVector

- Counter - A simple counter that scales linearly even when extremely hot. Most simple counters are either unsynchronized (hence drop counts, generally really badly beyond 2 CPUs), or are normally locked (hence bottleneck in the 5-10 CPU range), or might use Atomics (hence bottleneck in the 25-50 CPU range).

https://github.com/stephenc/high-scale-lib


15. snaptree v0.1

A concurrent AVL tree with fast cloning, snapshots, and consistent iteration. It is described in the paper "A Practical Concurrent Binary Search Tree" by N. Bronson, J. Casper, H. Chafi, and K. Olukotun, published in PPoPP'10.

https://github.com/nbronson/snaptree/


16. httpclient

The Apache HttpComponents™ project is responsible for creating and maintaining a toolset of low level Java components focused on HTTP and associated protocols.

http://hc.apache.org


17. common logging v1.1.1

When writing a library it is very useful to log information. However there are many logging implementations out there, and a library cannot impose the use of a particular one on the overall application that the library is a part of.

http://commons.apache.org/proper/commons-logging/

18. common collections v3.2.1

Commons-Collections seeks to build upon the JDK classes by providing new interfaces, implementations and utilities. There are many features, including:

- Bag interface for collections that have a number of copies of each object

- BidiMap interface for maps that can be looked up from value to key as well as key to value

- MapIterator interface to provide simple and quick iteration over maps

- Transforming decorators that alter each object as it is added to the collection

- Composite collections that make multiple collections look like one

- Ordered maps and sets that retain the order elements are added in, including an LRU based map

- Reference map that allows keys and/or values to be garbage collected under close control

- Many comparator implementations

- Many iterator implementations

- Adapter classes from array and enumerations to collections

- Utilities to test or create typical set-theory properties of collections such as union, intersection, and closure

http://commons.apache.org/proper/commons-collections/


19. common io v2.4

Commons IO is a library of utilities to assist with developing IO functionality.

http://commons.apache.org/proper/commons-io/


20. common cli v1.2

The Apache Commons CLI library provides an API for parsing command line options passed to programs. It's also able to print help messages detailing the options available for a command line tool.

http://commons.apache.org/proper/commons-cli/


20. common beanutils v1.8.3

Most Java developers are used to creating Java classes that conform to the JavaBeans naming patterns for property getters and setters. It is natural to then access these methods directly, using calls to the corresponding getXxx and setXxx methods. However, there are some occasions where dynamic access to Java object properties (without compiled-in knowledge of the property getter and setter methods to be called) is needed.

http://commons.apache.org/proper/commons-beanutils/


21. zookeeper v3.4.5

ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which makes them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.

http://zookeeper.apache.org/


22. lucene v3.0.3

Apache Lucene is a high-performance, full-featured text search engine library written entirely in Java. It is a technology suitable for nearly any application that requires full-text search, especially cross-platform

http://lucene.apache.org/core/


23. JavaMail

The JavaMail API provides a platform-independent and protocol-independent framework to build mail and messaging applications. The JavaMail API is available as an optional package for use with the Java SE platform and is also included in the Java EE platform.

http://www.oracle.com/technetwork/java/javamail/index.html


24. Java Persistence API

The Java Persistence API (JPA) provides a standard ORM technology for accessing relational databases and replaces the Entity Beans previously provided by EJB. JPA is defined as part of the EJB 3.0 spec in JSR 220, but it does not depend on an EJB container and can be used from EJB, web modules, and Java SE clients alike. JPA also lets you choose whichever persistence provider implementation you prefer. (http://ko.wikipedia.org/wiki/JPA)

http://www.oracle.com/technetwork/java/javaee/tech/persistence-jsp-140049.html


25. Java UUID Generator v3.1.2

Java Uuid Generator (JUG) is a pure java UUID generator, that can be used either as a component in a bigger application, or as a standalone command line tool (similar to Unix 'uuidgen'). UUIDs are 128-bit Universally Unique IDentifiers (aka GUID, Globally Unique IDentifier used in Windows world).

http://wiki.fasterxml.com/JugHome


26. hazelcast v1.9.3.1

Hazelcast is an open source clustering and highly scalable data distribution platform for Java, which is:

- Lightning-fast; thousands of operations/sec.

- Fail-safe; no losing data after crashes.

- Dynamically scales as new servers added.

- Super-easy to use; include a single jar.

http://www.hazelcast.com/


27. curator

Curator is a set of Java libraries that make using Apache ZooKeeper much easier.

http://curator.incubator.apache.org/

http://netflix.github.io/curator/


28. jackson

Inspired by the quality and variety of XML tooling available for the Java platform (StAX, JAXB, etc.), Jackson is a multi-purpose Java library for processing JSON. Jackson aims to be the best possible combination of fast, correct, lightweight, and ergonomic for developers.

http://jackson.codehaus.org/


29. spring

The Spring Framework is an open-source application framework for the Java platform, often simply called Spring. It provides a variety of services for developing dynamic web sites.

http://spring.io/


30. snakeyaml v1.8

YAML is a data serialization format designed for human readability and interaction with scripting languages.

SnakeYAML is a YAML parser and emitter for the Java programming language.

https://code.google.com/p/snakeyaml/


31. jsoup v1.6

jsoup is a Java library for working with real-world HTML. It provides a very convenient API for extracting and manipulating data, using the best of DOM, CSS, and jquery-like methods.

http://jsoup.org/


32. perf4j v0.9.12

Perf4J is a set of utilities for calculating and displaying performance statistics for Java code. For developers who are familiar with logging frameworks such as log4j or logback, an analogy helps to describe Perf4J: Perf4J is to System.currentTimeMillis() as log4j is to System.out.println().

http://perf4j.codehaus.org/


33. aspectj v1.6

AspectJ is an aspect-oriented programming (AOP) extension created at PARC for the Java programming language. It is available in Eclipse Foundation open-source projects, both stand-alone and integrated into Eclipse. AspectJ has become a widely-used de facto standard for AOP by emphasizing simplicity and usability for end users. It uses Java-like syntax, and included IDE integrations for displaying crosscutting structure since its initial public release in 2001.(http://en.wikipedia.org/wiki/AspectJ)

http://eclipse.org/aspectj/


34. cglib v2.2.2

CGLIB (Code Generator Library) is a code generation library that can dynamically generate proxies for Java classes at runtime. (http://blog.daum.net/bacsumu/13042134)

http://cglib.sourceforge.net/


35. Jline v0.9.94

JLine is a Java library for handling console input. It is similar in functionality to BSD editline and GNU readline. People familiar with the readline/editline capabilities for modern shells (such as bash and tcsh) will find most of the command editing features of JLine to be familiar.

http://jline.sourceforge.net/


36. netty v3.2.7

Netty is a NIO client server framework which enables quick and easy development of network applications such as protocol servers and clients. It greatly simplifies and streamlines network programming such as TCP and UDP socket server.

http://netty.io/


37. cassandra thrift

The purpose of using Thrift in Cassandra was to allow portable (across programming languages) access to the database

http://wiki.apache.org/cassandra/ThriftInterface


38. thrift

The Apache Thrift software framework, for scalable cross-language services development, combines a software stack with a code generation engine to build services that work efficiently and seamlessly between C++, Java, Python, PHP, Ruby, Erlang, Perl, Haskell, C#, Cocoa, JavaScript, Node.js, Smalltalk, OCaml and Delphi and other languages.

http://thrift.apache.org/


39. apache commons lang v2.5

The standard Java libraries fail to provide enough methods for manipulation of its core classes. Apache Commons Lang provides these extra methods.

http://commons.apache.org/proper/commons-lang/


39. apache commons codec v2.5

Apache Commons Codec (TM) software provides implementations of common encoders and decoders such as Base64, Hex, Phonetic and URLs.

http://commons.apache.org/proper/commons-codec/


40. apache shiro v1.2.0

Apache Shiro is a powerful and easy-to-use Java security framework that performs authentication, authorization, cryptography, and session management. With Shiro’s easy-to-understand API, you can quickly and easily secure any application – from the smallest mobile applications to the largest web and enterprise applications.

http://shiro.apache.org/


41. guava v12.0

The Guava project contains several of Google's core libraries that we rely on in our Java-based projects: collections, caching, primitives support, concurrency libraries, common annotations, string processing, I/O, and so forth.

https://code.google.com/p/guava-libraries/


42. jsr305 v1.3.9

Annotations for Software Defect Detection

http://jcp.org/en/jsr/detail?id=305


43. SLF4J v1.6.1

A logging facade that can be used together with the JDK logger, log4j, and other logging frameworks.

http://www.slf4j.org/manual.html


44. log4j v1.2.16

The Apache Logging Services Project creates and maintains open-source software related to the logging of application behavior and released at no charge to the public.

http://logging.apache.org/


45. Gradual migration to SLF4J from Jakarta Commons Logging (JCL)

jcl-over-slf4j.jar

jul-to-slf4j.jar

http://www.slf4j.org/legacy.html


46. mock javamail v1.9

Mock JavaMail comes to the rescue. This project takes advantage of pluggability in JavaMail, so that you can send/receive e-mails against a temporary in-memory "mailbox". For example, when this jar is in your classpath, code that normally sends e-mail actually just sends it to an in-memory mailbox.

https://java.net/projects/mock-javamail


47. hamcrest v1.3

Hamcrest is a library of matchers, which can be combined to create flexible expressions of intent in tests.

https://github.com/hamcrest/JavaHamcrest


48. amber v0.22

Apache Oltu (formerly Amber) is an OAuth protocol implementation in Java.

http://oltu.apache.org/download.html

http://oauth.net/2/

OAuth 2.0

OAuth 2.0 is the next evolution of the OAuth protocol, which was originally created in late 2006. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices. This specification is being developed within the IETF OAuth WG and is based on the OAuth WRAP proposal.


49. Jettison v1.2

Jettison is a collection of Java APIs (like STaX and DOM) which read and write JSON. This allows nearly transparent enablement of JSON based web services in services frameworks like CXF or XML serialization frameworks like Xstream.

http://jettison.codehaus.org/


50. stax v1.0.1

This is the homepage for the StAX Reference Implementation (RI). StAX is a standard XML processing API that allows you to stream XML data from and to your application. This StAX implementation is the standard pull parser implementation for JSR-173 specification.

http://stax.codehaus.org/


51. jcloud

Apache jclouds® is an open source library that helps you get started in the cloud and utilizes your Java or Clojure development skills. The jclouds API gives you the freedom to use portable abstractions or cloud-specific features.

http://jclouds.incubator.apache.org/


52. google guice v3.0

Put simply, Guice alleviates the need for factories and the use of new in your Java code. Think of Guice's @Inject as the new new. You will still need to write factories in some cases, but your code will not depend directly on them. Your code will be easier to change, unit test and reuse in other contexts.

https://code.google.com/p/google-guice/


53. rocoto v6.1

Rocoto is a small collection of reusable Modules for Google Guice to make easier the task of loading java.util.Properties by reading configuration files.

http://99soft.github.io/rocoto/


54. gson v2.2

Gson is a Java library that can be used to convert Java Objects into their JSON representation. It can also be used to convert a JSON string to an equivalent Java object. Gson can work with arbitrary Java objects including pre-existing objects that you do not have source-code of.

https://code.google.com/p/google-gson/


55. jython v2.5

Python for the Java platform.

http://www.jython.org/


56. jsr250

Common Annotations for the Java Platform

http://en.wikipedia.org/wiki/JSR_250



Posted by sjokim
Monitoring data can be classified from a number of perspectives.
Such classifications help make monitoring systematic.

※ Types of performance data

1. Counter - data that arrives as plain numbers (e.g. Windows performance counters)

2. Snapshot - data that captures some state, such as dumps, lists, or environment variables

3. Trace - data that follows an execution path (service profiling)


※ Presentation methods

    - Score

    - Equalizer

    - Real-time (line/bar)

    - 24-hour (line/bar)


※ Measurement interval

    - Real-time (every few seconds)

    - Per minute (1 / 5 / 10 minutes)

    - Hourly

    - Daily / monthly / yearly


※ Performance by location

    1. Internal Performance

         - Process Performance

         - System Performance

    2. External Performance


※ Resource types

    - S/W Resource

    - H/W Resource


※ Performance management targets

    1. Resource

    2. Service

    3. User


※ Stakeholders

    1. Business

    2. Manager

    3. Administrator


※ System life cycle

    1. Testing Stage

    2. Opening Stage

    3. Steady Stage


Posted by sjokim

Documentation..

Uncategorized 2012. 9. 19. 14:25

When you develop software, there are plenty of times you have to produce documents.

Development deliverables in particular are worth as much as the code; in a pilot project they can even matter more than the code, because later development and maintenance build on them. Even if you did not write them during development, I recommend producing the development documents afterwards.


Development documents are not the most important documents on every project, though. They matter, but some documents matter more.


On a project that builds a packaged product, the manuals matter more. The development details can be handled loosely in a wiki or as comments in the code, but the manuals really have to be written with care.

This is the biggest difference between a so-called SI project and a package project.


So what documents does a package project need?


  • Reference manual
  • User manual
  • Installation manual
  • Release notes
  • Quick-start manual
  • Introductory slide deck (PPT)
  • White paper - product positioning, core technology, core value
  • Technical brochure
  • Sales brochure
  • Product adoption spec
  • Comparison with competing products


Posted by sjokim

When you build a Java program you end up using a great many components. Setting each of them on the classpath before running the program is a real chore, so it would be convenient to simply drop the .jar files into a particular directory and have the classpath handled automatically. That is what a launcher is for.

A launcher needs the following logic:


1. Initialize a URLClassLoader

2. Invoke the original main() method via reflection


import java.io.File;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.net.URL;
import java.net.URLClassLoader;

public class Launcher {

    public static void main(String[] args) throws Throwable {

        // Collect every .jar in a directory into a URL array
        // (the "lib" directory name here is only an example)
        File[] jars = new File("lib").listFiles((dir, name) -> name.endsWith(".jar"));
        URL[] jarfiles = new URL[jars.length];
        for (int i = 0; i < jars.length; i++) {
            jarfiles[i] = jars[i].toURI().toURL();
        }

        // Initialize a URLClassLoader that can see all of the jars
        ClassLoader classloader =
                new URLClassLoader(jarfiles, Launcher.class.getClassLoader());
        Thread.currentThread().setContextClassLoader(classloader);

        // The real main class is passed as a system property, e.g. -Dmain=com.example.App
        String mainclass = System.getProperty("main");

        // Invoke the original main() method via reflection
        Class<?> c = Class.forName(mainclass, true, classloader);
        Class<?>[] argc = { String[].class };
        Object[] argo = { args };

        Method method = c.getDeclaredMethod("main", argc);
        try {
            method.invoke(null, argo);
        } catch (InvocationTargetException e) {
            throw e.getTargetException();
        }
    }
}


With a URLClassLoader and reflection, you can build a launcher this simply.

 

Posted by sjokim

님의 침묵

Uncategorized 2012. 9. 14. 14:03


님은 갔습니다. 아아 사랑하는 나의 님은 갔습니다.

 푸른 산빛을 깨치고 단풍나무 숲을 향하여 난 작은 길을

걸어서 차마 떨치고 갔습니다.

황금의 꽃같이 굳고 빛나던 옛 맹세는 차디찬 티 끝이 되어서

한숨의 미풍에 날아갔습니다.

 날카로운 첫 키스의 추억은 나의 운명의 지침을 돌려놓고

뒷걸음쳐서 사라졌습니다.

 나는 향기로운 님의 말소리에 귀먹고 꽃다운 님의 얼굴에

눈 멀었습니다.

 사랑도 사람의 일이라 만날 때에 미리 떠날 것을 염려하고

경계하지 아니한 것은 아니지만, 이별은 뜻밖의 일이 되고

놀란 가슴은 새로운 슬픔에 터집니다.

 그러나 이별은 쓸데없는 눈물의 원천을 만들고 마는 것은

스스로 사랑을 깨치는 것인 줄 아는 까닭에 걷잡을 수 없는

슬픔의 힘을 옮겨서 정수박이에 들어부었습니다.

우리는 만날 때에 떠날 것을 염려하는 것과 같이 떠날 때에

만날 것을 믿습니다.

아아, 님은 갔지마는 나는 님을 보내지 아니하였습니다.

 제 곡조를 못 이기는 사랑의 곡조는 님의 침묵을 휩싸고 돕니다. 

                                        

                                         ㅡ만해 한용운ㅡ


Posted by sjokim




Enterprise Architecture (from MSDN)


Posted by sjokim
IBM JVM
-Xdump:tool:events=throw,filter=java/lang/OutOfMemoryError,exec="kill -9 %pid"

JRockit
-XXexitOnOutOfMemory

Sun VM (HotSpot)
UNIX: -XX:OnOutOfMemoryError="kill -9 %p"
Windows: -XX:OnOutOfMemoryError="taskkill /F /PID %p"

Posted by sjokim